00:00:00.000 Started by upstream project "autotest-per-patch" build number 126191 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 23954 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.031 The recommended git tool is: git 00:00:00.031 using credential 00000000-0000-0000-0000-000000000002 00:00:00.033 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.048 Fetching changes from the remote Git repository 00:00:00.052 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.074 Using shallow fetch with depth 1 00:00:00.074 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.074 > git --version # timeout=10 00:00:00.114 > git --version # 'git version 2.39.2' 00:00:00.114 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.149 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.149 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/71/24171/2 # timeout=5 00:00:03.511 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.523 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.534 Checking out Revision 1e4055c0ee28da4fa0007a72f92a6499a45bf65d (FETCH_HEAD) 00:00:03.534 > git config core.sparsecheckout # timeout=10 00:00:03.546 > git read-tree -mu HEAD # timeout=10 00:00:03.563 > git checkout -f 1e4055c0ee28da4fa0007a72f92a6499a45bf65d # timeout=5 00:00:03.588 Commit message: "packer: Drop centos7" 00:00:03.588 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.894 [Pipeline] Start of Pipeline 00:00:03.909 [Pipeline] library 00:00:03.910 Loading library shm_lib@master 00:00:03.910 Library shm_lib@master is cached. Copying from home. 00:00:03.927 [Pipeline] node 00:00:03.936 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:03.938 [Pipeline] { 00:00:03.947 [Pipeline] catchError 00:00:03.948 [Pipeline] { 00:00:03.960 [Pipeline] wrap 00:00:03.970 [Pipeline] { 00:00:03.977 [Pipeline] stage 00:00:03.978 [Pipeline] { (Prologue) 00:00:03.993 [Pipeline] echo 00:00:03.994 Node: VM-host-SM17 00:00:03.998 [Pipeline] cleanWs 00:00:04.004 [WS-CLEANUP] Deleting project workspace... 00:00:04.004 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.010 [WS-CLEANUP] done 00:00:04.169 [Pipeline] setCustomBuildProperty 00:00:04.231 [Pipeline] httpRequest 00:00:04.253 [Pipeline] echo 00:00:04.255 Sorcerer 10.211.164.101 is alive 00:00:04.260 [Pipeline] httpRequest 00:00:04.264 HttpMethod: GET 00:00:04.265 URL: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:04.266 Sending request to url: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:04.277 Response Code: HTTP/1.1 200 OK 00:00:04.277 Success: Status code 200 is in the accepted range: 200,404 00:00:04.278 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:05.065 [Pipeline] sh 00:00:05.336 + tar --no-same-owner -xf jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:05.347 [Pipeline] httpRequest 00:00:05.361 [Pipeline] echo 00:00:05.363 Sorcerer 10.211.164.101 is alive 00:00:05.370 [Pipeline] httpRequest 00:00:05.374 HttpMethod: GET 00:00:05.374 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:05.374 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:05.375 Response Code: HTTP/1.1 200 OK 00:00:05.376 Success: Status code 200 is in the accepted range: 200,404 00:00:05.376 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:27.909 [Pipeline] sh 00:00:28.188 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:31.485 [Pipeline] sh 00:00:31.765 + git -C spdk log --oneline -n5 00:00:31.765 2728651ee accel: adjust task per ch define name 00:00:31.765 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:31.765 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:31.765 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:00:31.765 719d03c6a sock/uring: only register net impl if supported 00:00:31.790 [Pipeline] writeFile 00:00:31.808 [Pipeline] sh 00:00:32.089 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:32.103 [Pipeline] sh 00:00:32.384 + cat autorun-spdk.conf 00:00:32.384 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.384 SPDK_TEST_NVMF=1 00:00:32.384 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.384 SPDK_TEST_URING=1 00:00:32.384 SPDK_TEST_USDT=1 00:00:32.384 SPDK_RUN_UBSAN=1 00:00:32.384 NET_TYPE=virt 00:00:32.384 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.391 RUN_NIGHTLY=0 00:00:32.393 [Pipeline] } 00:00:32.412 [Pipeline] // stage 00:00:32.430 [Pipeline] stage 00:00:32.433 [Pipeline] { (Run VM) 00:00:32.448 [Pipeline] sh 00:00:32.728 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.728 + echo 'Start stage prepare_nvme.sh' 00:00:32.728 Start stage prepare_nvme.sh 00:00:32.728 + [[ -n 2 ]] 00:00:32.728 + disk_prefix=ex2 00:00:32.728 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 ]] 00:00:32.728 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf ]] 00:00:32.728 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf 00:00:32.728 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.728 ++ SPDK_TEST_NVMF=1 00:00:32.728 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.728 ++ SPDK_TEST_URING=1 00:00:32.728 ++ SPDK_TEST_USDT=1 00:00:32.728 ++ SPDK_RUN_UBSAN=1 00:00:32.728 ++ NET_TYPE=virt 00:00:32.728 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.728 ++ RUN_NIGHTLY=0 00:00:32.728 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:32.728 + nvme_files=() 00:00:32.728 + declare -A nvme_files 00:00:32.728 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.728 + nvme_files['nvme.img']=5G 00:00:32.728 + nvme_files['nvme-cmb.img']=5G 00:00:32.728 + nvme_files['nvme-multi0.img']=4G 00:00:32.728 + nvme_files['nvme-multi1.img']=4G 00:00:32.728 + nvme_files['nvme-multi2.img']=4G 00:00:32.728 + nvme_files['nvme-openstack.img']=8G 00:00:32.728 + nvme_files['nvme-zns.img']=5G 00:00:32.728 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.728 + (( SPDK_TEST_FTL == 1 )) 00:00:32.728 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.728 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:32.728 + for nvme in "${!nvme_files[@]}" 00:00:32.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:32.728 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.728 + for nvme in "${!nvme_files[@]}" 00:00:32.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:32.728 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.728 + for nvme in "${!nvme_files[@]}" 00:00:32.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:32.728 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:32.728 + for nvme in "${!nvme_files[@]}" 00:00:32.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:32.728 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.728 + for nvme in "${!nvme_files[@]}" 00:00:32.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:32.728 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.728 + for nvme in "${!nvme_files[@]}" 00:00:32.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:32.728 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.728 + for nvme in "${!nvme_files[@]}" 00:00:32.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:32.994 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.994 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:32.994 + echo 'End stage prepare_nvme.sh' 00:00:32.994 End stage prepare_nvme.sh 00:00:33.007 [Pipeline] sh 00:00:33.285 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:33.285 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:00:33.285 00:00:33.285 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant 00:00:33.285 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk 00:00:33.285 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:33.285 HELP=0 00:00:33.285 DRY_RUN=0 00:00:33.285 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:33.285 NVME_DISKS_TYPE=nvme,nvme, 00:00:33.285 NVME_AUTO_CREATE=0 00:00:33.285 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:33.285 NVME_CMB=,, 00:00:33.285 NVME_PMR=,, 00:00:33.285 NVME_ZNS=,, 00:00:33.285 NVME_MS=,, 00:00:33.285 NVME_FDP=,, 00:00:33.285 SPDK_VAGRANT_DISTRO=fedora38 00:00:33.285 SPDK_VAGRANT_VMCPU=10 00:00:33.285 SPDK_VAGRANT_VMRAM=12288 00:00:33.285 SPDK_VAGRANT_PROVIDER=libvirt 00:00:33.285 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:33.285 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:33.285 SPDK_OPENSTACK_NETWORK=0 00:00:33.285 VAGRANT_PACKAGE_BOX=0 00:00:33.285 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:00:33.285 FORCE_DISTRO=true 00:00:33.285 VAGRANT_BOX_VERSION= 00:00:33.285 EXTRA_VAGRANTFILES= 00:00:33.285 NIC_MODEL=e1000 00:00:33.285 00:00:33.285 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt' 00:00:33.285 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:36.570 Bringing machine 'default' up with 'libvirt' provider... 00:00:36.827 ==> default: Creating image (snapshot of base box volume). 00:00:37.085 ==> default: Creating domain with the following settings... 00:00:37.085 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721047372_8e46c07352b8c3c95405 00:00:37.085 ==> default: -- Domain type: kvm 00:00:37.085 ==> default: -- Cpus: 10 00:00:37.085 ==> default: -- Feature: acpi 00:00:37.085 ==> default: -- Feature: apic 00:00:37.085 ==> default: -- Feature: pae 00:00:37.085 ==> default: -- Memory: 12288M 00:00:37.085 ==> default: -- Memory Backing: hugepages: 00:00:37.085 ==> default: -- Management MAC: 00:00:37.085 ==> default: -- Loader: 00:00:37.085 ==> default: -- Nvram: 00:00:37.085 ==> default: -- Base box: spdk/fedora38 00:00:37.085 ==> default: -- Storage pool: default 00:00:37.085 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721047372_8e46c07352b8c3c95405.img (20G) 00:00:37.085 ==> default: -- Volume Cache: default 00:00:37.085 ==> default: -- Kernel: 00:00:37.085 ==> default: -- Initrd: 00:00:37.085 ==> default: -- Graphics Type: vnc 00:00:37.085 ==> default: -- Graphics Port: -1 00:00:37.085 ==> default: -- Graphics IP: 127.0.0.1 00:00:37.085 ==> default: -- Graphics Password: Not defined 00:00:37.085 ==> default: -- Video Type: cirrus 00:00:37.085 ==> default: -- Video VRAM: 9216 00:00:37.085 ==> default: -- Sound Type: 00:00:37.085 ==> default: -- Keymap: en-us 00:00:37.085 ==> default: -- TPM Path: 00:00:37.085 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:37.085 ==> default: -- Command line args: 00:00:37.085 ==> default: -> value=-device, 00:00:37.085 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:37.085 ==> default: -> value=-drive, 00:00:37.085 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:00:37.085 ==> default: -> value=-device, 
00:00:37.085 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.085 ==> default: -> value=-device, 00:00:37.085 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:37.085 ==> default: -> value=-drive, 00:00:37.085 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:37.085 ==> default: -> value=-device, 00:00:37.085 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.085 ==> default: -> value=-drive, 00:00:37.085 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:37.085 ==> default: -> value=-device, 00:00:37.085 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.085 ==> default: -> value=-drive, 00:00:37.085 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:37.085 ==> default: -> value=-device, 00:00:37.085 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.085 ==> default: Creating shared folders metadata... 00:00:37.085 ==> default: Starting domain. 00:00:39.019 ==> default: Waiting for domain to get an IP address... 00:00:53.936 ==> default: Waiting for SSH to become available... 00:00:55.309 ==> default: Configuring and enabling network interfaces... 00:00:59.499 default: SSH address: 192.168.121.163:22 00:00:59.499 default: SSH username: vagrant 00:00:59.499 default: SSH auth method: private key 00:01:02.029 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:10.139 ==> default: Mounting SSHFS shared folder... 00:01:10.707 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:10.707 ==> default: Checking Mount.. 00:01:12.084 ==> default: Folder Successfully Mounted! 00:01:12.084 ==> default: Running provisioner: file... 00:01:13.021 default: ~/.gitconfig => .gitconfig 00:01:13.281 00:01:13.281 SUCCESS! 00:01:13.281 00:01:13.281 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt and type "vagrant ssh" to use. 00:01:13.281 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:13.281 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt" to destroy all trace of vm. 
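The -drive/-device pairs printed by vagrant-libvirt above are plain QEMU NVMe emulation arguments. As a rough illustration only (not the CI's exact invocation), a minimal standalone command attaching one of the raw backing files as an NVMe namespace could look like the sketch below; the backing-file path, serial number and namespace properties are copied from the log, while the machine type, memory size and the fedora38.qcow2 boot image are assumed placeholders:

# Sketch: expose a raw file as an emulated NVMe namespace, mirroring the
# controller (-device nvme) / namespace (-device nvme-ns) split shown above.
# Machine type, RAM/CPU count and the boot image are assumed placeholders.
qemu-system-x86_64 \
  -machine q35,accel=kvm -m 2048 -smp 2 \
  -drive file=fedora38.qcow2,if=virtio \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme,id=nvme-0,serial=12340 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest a controller/namespace pair like this shows up as an nvme block device, which is the kind of device the setup.sh status output later in the log enumerates (nvme0n1, nvme1n1-n3).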
00:01:13.281 00:01:13.292 [Pipeline] } 00:01:13.314 [Pipeline] // stage 00:01:13.325 [Pipeline] dir 00:01:13.326 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt 00:01:13.328 [Pipeline] { 00:01:13.344 [Pipeline] catchError 00:01:13.346 [Pipeline] { 00:01:13.362 [Pipeline] sh 00:01:13.641 + vagrant ssh-config --host vagrant 00:01:13.641 + sed -ne /^Host/,$p 00:01:13.641 + tee ssh_conf 00:01:16.981 Host vagrant 00:01:16.982 HostName 192.168.121.163 00:01:16.982 User vagrant 00:01:16.982 Port 22 00:01:16.982 UserKnownHostsFile /dev/null 00:01:16.982 StrictHostKeyChecking no 00:01:16.982 PasswordAuthentication no 00:01:16.982 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:16.982 IdentitiesOnly yes 00:01:16.982 LogLevel FATAL 00:01:16.982 ForwardAgent yes 00:01:16.982 ForwardX11 yes 00:01:16.982 00:01:16.996 [Pipeline] withEnv 00:01:16.999 [Pipeline] { 00:01:17.017 [Pipeline] sh 00:01:17.292 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.292 source /etc/os-release 00:01:17.292 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.292 # Minimal, systemd-like check. 00:01:17.292 if [[ -e /.dockerenv ]]; then 00:01:17.292 # Clear garbage from the node's name: 00:01:17.292 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.292 # $HOSTNAME is the actual container id 00:01:17.292 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.292 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.292 # We can assume this is a mount from a host where container is running, 00:01:17.292 # so fetch its hostname to easily identify the target swarm worker. 00:01:17.292 container="$(< /etc/hostname) ($agent)" 00:01:17.292 else 00:01:17.292 # Fallback 00:01:17.292 container=$agent 00:01:17.292 fi 00:01:17.292 fi 00:01:17.292 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.292 00:01:17.300 [Pipeline] } 00:01:17.315 [Pipeline] // withEnv 00:01:17.320 [Pipeline] setCustomBuildProperty 00:01:17.332 [Pipeline] stage 00:01:17.333 [Pipeline] { (Tests) 00:01:17.346 [Pipeline] sh 00:01:17.619 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:17.655 [Pipeline] sh 00:01:17.928 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:17.941 [Pipeline] timeout 00:01:17.941 Timeout set to expire in 30 min 00:01:17.943 [Pipeline] { 00:01:17.956 [Pipeline] sh 00:01:18.295 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:18.860 HEAD is now at 2728651ee accel: adjust task per ch define name 00:01:18.869 [Pipeline] sh 00:01:19.142 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:19.413 [Pipeline] sh 00:01:19.687 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:19.704 [Pipeline] sh 00:01:19.981 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:20.239 ++ readlink -f spdk_repo 00:01:20.239 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:20.239 + [[ -n /home/vagrant/spdk_repo ]] 00:01:20.239 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:20.239 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:20.239 + 
[[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:20.239 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:20.239 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:20.239 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:20.239 + cd /home/vagrant/spdk_repo 00:01:20.239 + source /etc/os-release 00:01:20.239 ++ NAME='Fedora Linux' 00:01:20.239 ++ VERSION='38 (Cloud Edition)' 00:01:20.239 ++ ID=fedora 00:01:20.239 ++ VERSION_ID=38 00:01:20.239 ++ VERSION_CODENAME= 00:01:20.239 ++ PLATFORM_ID=platform:f38 00:01:20.239 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:20.239 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.239 ++ LOGO=fedora-logo-icon 00:01:20.239 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:20.239 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.239 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:20.239 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.239 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.239 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.239 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:20.239 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.239 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:20.239 ++ SUPPORT_END=2024-05-14 00:01:20.239 ++ VARIANT='Cloud Edition' 00:01:20.239 ++ VARIANT_ID=cloud 00:01:20.239 + uname -a 00:01:20.239 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:20.239 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:20.497 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:20.497 Hugepages 00:01:20.497 node hugesize free / total 00:01:20.755 node0 1048576kB 0 / 0 00:01:20.755 node0 2048kB 0 / 0 00:01:20.755 00:01:20.755 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:20.755 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:20.755 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:20.755 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:20.755 + rm -f /tmp/spdk-ld-path 00:01:20.755 + source autorun-spdk.conf 00:01:20.755 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.755 ++ SPDK_TEST_NVMF=1 00:01:20.755 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.755 ++ SPDK_TEST_URING=1 00:01:20.755 ++ SPDK_TEST_USDT=1 00:01:20.755 ++ SPDK_RUN_UBSAN=1 00:01:20.755 ++ NET_TYPE=virt 00:01:20.755 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.755 ++ RUN_NIGHTLY=0 00:01:20.755 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:20.755 + [[ -n '' ]] 00:01:20.755 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:20.755 + for M in /var/spdk/build-*-manifest.txt 00:01:20.755 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:20.755 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.755 + for M in /var/spdk/build-*-manifest.txt 00:01:20.755 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:20.755 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.755 ++ uname 00:01:20.755 + [[ Linux == \L\i\n\u\x ]] 00:01:20.755 + sudo dmesg -T 00:01:20.755 + sudo dmesg --clear 00:01:20.755 + dmesg_pid=5097 00:01:20.755 + sudo dmesg -Tw 00:01:20.755 + [[ Fedora Linux == FreeBSD ]] 00:01:20.755 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.755 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.755 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:20.755 + [[ -x /usr/src/fio-static/fio 
]] 00:01:20.755 + export FIO_BIN=/usr/src/fio-static/fio 00:01:20.755 + FIO_BIN=/usr/src/fio-static/fio 00:01:20.755 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:20.755 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:20.755 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:20.755 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.755 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.755 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:20.755 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.755 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.755 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:20.755 Test configuration: 00:01:20.755 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.755 SPDK_TEST_NVMF=1 00:01:20.755 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.755 SPDK_TEST_URING=1 00:01:20.755 SPDK_TEST_USDT=1 00:01:20.755 SPDK_RUN_UBSAN=1 00:01:20.755 NET_TYPE=virt 00:01:20.755 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.014 RUN_NIGHTLY=0 12:43:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:21.014 12:43:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.014 12:43:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.014 12:43:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.014 12:43:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.014 12:43:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.014 12:43:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.014 12:43:36 -- paths/export.sh@5 -- $ export PATH 00:01:21.014 12:43:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.014 12:43:36 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:21.014 12:43:36 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:21.014 12:43:36 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721047416.XXXXXX 00:01:21.014 12:43:36 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721047416.5mMMJu 00:01:21.014 12:43:36 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:21.014 12:43:36 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:21.014 12:43:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:21.014 12:43:36 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:21.014 12:43:36 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.014 12:43:36 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:21.014 12:43:36 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:21.014 12:43:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.014 12:43:36 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:21.014 12:43:36 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:21.014 12:43:36 -- pm/common@17 -- $ local monitor 00:01:21.014 12:43:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.014 12:43:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.014 12:43:36 -- pm/common@25 -- $ sleep 1 00:01:21.014 12:43:36 -- pm/common@21 -- $ date +%s 00:01:21.014 12:43:36 -- pm/common@21 -- $ date +%s 00:01:21.014 12:43:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721047416 00:01:21.014 12:43:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721047416 00:01:21.014 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721047416_collect-vmstat.pm.log 00:01:21.014 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721047416_collect-cpu-load.pm.log 00:01:21.974 12:43:37 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:21.974 12:43:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.974 12:43:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.974 12:43:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:21.974 12:43:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.974 Mon Jul 15 12:43:37 PM UTC 2024 00:01:21.974 12:43:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.974 v24.09-pre-206-g2728651ee 00:01:21.974 12:43:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:21.974 12:43:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.974 12:43:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.974 12:43:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:21.974 12:43:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:21.974 12:43:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.974 ************************************ 00:01:21.974 START TEST ubsan 00:01:21.974 ************************************ 00:01:21.974 using ubsan 00:01:21.975 12:43:37 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:21.975 00:01:21.975 real 0m0.000s 
00:01:21.975 user 0m0.000s 00:01:21.975 sys 0m0.000s 00:01:21.975 12:43:37 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:21.975 12:43:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.975 ************************************ 00:01:21.975 END TEST ubsan 00:01:21.975 ************************************ 00:01:21.975 12:43:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:21.975 12:43:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.975 12:43:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.975 12:43:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.975 12:43:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.975 12:43:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.975 12:43:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.975 12:43:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.975 12:43:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.975 12:43:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:22.233 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:22.233 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:22.492 Using 'verbs' RDMA provider 00:01:35.665 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:50.526 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:50.526 Creating mk/config.mk...done. 00:01:50.526 Creating mk/cc.flags.mk...done. 00:01:50.526 Type 'make' to build. 00:01:50.526 12:44:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:50.526 12:44:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:50.526 12:44:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:50.526 12:44:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.526 ************************************ 00:01:50.526 START TEST make 00:01:50.526 ************************************ 00:01:50.526 12:44:04 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:50.526 make[1]: Nothing to be done for 'all'. 
00:02:00.493 The Meson build system 00:02:00.493 Version: 1.3.1 00:02:00.493 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:00.493 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:00.493 Build type: native build 00:02:00.493 Program cat found: YES (/usr/bin/cat) 00:02:00.493 Project name: DPDK 00:02:00.493 Project version: 24.03.0 00:02:00.493 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:00.493 C linker for the host machine: cc ld.bfd 2.39-16 00:02:00.493 Host machine cpu family: x86_64 00:02:00.493 Host machine cpu: x86_64 00:02:00.493 Message: ## Building in Developer Mode ## 00:02:00.493 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.493 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.493 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.493 Program python3 found: YES (/usr/bin/python3) 00:02:00.493 Program cat found: YES (/usr/bin/cat) 00:02:00.493 Compiler for C supports arguments -march=native: YES 00:02:00.493 Checking for size of "void *" : 8 00:02:00.493 Checking for size of "void *" : 8 (cached) 00:02:00.493 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:00.493 Library m found: YES 00:02:00.493 Library numa found: YES 00:02:00.493 Has header "numaif.h" : YES 00:02:00.493 Library fdt found: NO 00:02:00.493 Library execinfo found: NO 00:02:00.493 Has header "execinfo.h" : YES 00:02:00.493 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:00.493 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.493 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.493 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.493 Run-time dependency openssl found: YES 3.0.9 00:02:00.493 Run-time dependency libpcap found: YES 1.10.4 00:02:00.493 Has header "pcap.h" with dependency libpcap: YES 00:02:00.493 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.493 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.493 Compiler for C supports arguments -Wformat: YES 00:02:00.493 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.493 Compiler for C supports arguments -Wformat-security: NO 00:02:00.493 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.493 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.493 Compiler for C supports arguments -Wnested-externs: YES 00:02:00.493 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.493 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.493 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.493 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.493 Compiler for C supports arguments -Wundef: YES 00:02:00.493 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.493 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.493 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.493 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.493 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.493 Program objdump found: YES (/usr/bin/objdump) 00:02:00.493 Compiler for C supports arguments -mavx512f: YES 00:02:00.493 Checking if "AVX512 checking" compiles: YES 00:02:00.493 Fetching value of define "__SSE4_2__" : 1 00:02:00.493 Fetching value of define 
"__AES__" : 1 00:02:00.493 Fetching value of define "__AVX__" : 1 00:02:00.493 Fetching value of define "__AVX2__" : 1 00:02:00.493 Fetching value of define "__AVX512BW__" : (undefined) 00:02:00.493 Fetching value of define "__AVX512CD__" : (undefined) 00:02:00.493 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:00.493 Fetching value of define "__AVX512F__" : (undefined) 00:02:00.493 Fetching value of define "__AVX512VL__" : (undefined) 00:02:00.493 Fetching value of define "__PCLMUL__" : 1 00:02:00.493 Fetching value of define "__RDRND__" : 1 00:02:00.493 Fetching value of define "__RDSEED__" : 1 00:02:00.493 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:00.493 Fetching value of define "__znver1__" : (undefined) 00:02:00.493 Fetching value of define "__znver2__" : (undefined) 00:02:00.493 Fetching value of define "__znver3__" : (undefined) 00:02:00.493 Fetching value of define "__znver4__" : (undefined) 00:02:00.493 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.493 Message: lib/log: Defining dependency "log" 00:02:00.493 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.493 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.493 Checking for function "getentropy" : NO 00:02:00.493 Message: lib/eal: Defining dependency "eal" 00:02:00.493 Message: lib/ring: Defining dependency "ring" 00:02:00.493 Message: lib/rcu: Defining dependency "rcu" 00:02:00.493 Message: lib/mempool: Defining dependency "mempool" 00:02:00.493 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.493 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.493 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:00.493 Compiler for C supports arguments -mpclmul: YES 00:02:00.493 Compiler for C supports arguments -maes: YES 00:02:00.493 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.493 Compiler for C supports arguments -mavx512bw: YES 00:02:00.493 Compiler for C supports arguments -mavx512dq: YES 00:02:00.493 Compiler for C supports arguments -mavx512vl: YES 00:02:00.493 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.493 Compiler for C supports arguments -mavx2: YES 00:02:00.493 Compiler for C supports arguments -mavx: YES 00:02:00.493 Message: lib/net: Defining dependency "net" 00:02:00.493 Message: lib/meter: Defining dependency "meter" 00:02:00.493 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.493 Message: lib/pci: Defining dependency "pci" 00:02:00.493 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.493 Message: lib/hash: Defining dependency "hash" 00:02:00.493 Message: lib/timer: Defining dependency "timer" 00:02:00.493 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.493 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.493 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.493 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.493 Message: lib/power: Defining dependency "power" 00:02:00.493 Message: lib/reorder: Defining dependency "reorder" 00:02:00.493 Message: lib/security: Defining dependency "security" 00:02:00.493 Has header "linux/userfaultfd.h" : YES 00:02:00.493 Has header "linux/vduse.h" : YES 00:02:00.493 Message: lib/vhost: Defining dependency "vhost" 00:02:00.493 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.493 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.493 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.493 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.493 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.493 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.493 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.493 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.493 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.493 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.493 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.493 Configuring doxy-api-html.conf using configuration 00:02:00.493 Configuring doxy-api-man.conf using configuration 00:02:00.493 Program mandb found: YES (/usr/bin/mandb) 00:02:00.493 Program sphinx-build found: NO 00:02:00.493 Configuring rte_build_config.h using configuration 00:02:00.493 Message: 00:02:00.493 ================= 00:02:00.493 Applications Enabled 00:02:00.493 ================= 00:02:00.493 00:02:00.493 apps: 00:02:00.493 00:02:00.493 00:02:00.493 Message: 00:02:00.493 ================= 00:02:00.493 Libraries Enabled 00:02:00.493 ================= 00:02:00.493 00:02:00.493 libs: 00:02:00.493 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.493 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.493 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.493 00:02:00.493 Message: 00:02:00.493 =============== 00:02:00.493 Drivers Enabled 00:02:00.493 =============== 00:02:00.493 00:02:00.493 common: 00:02:00.493 00:02:00.493 bus: 00:02:00.493 pci, vdev, 00:02:00.493 mempool: 00:02:00.493 ring, 00:02:00.493 dma: 00:02:00.493 00:02:00.493 net: 00:02:00.493 00:02:00.493 crypto: 00:02:00.493 00:02:00.493 compress: 00:02:00.493 00:02:00.493 vdpa: 00:02:00.493 00:02:00.493 00:02:00.493 Message: 00:02:00.493 ================= 00:02:00.493 Content Skipped 00:02:00.493 ================= 00:02:00.493 00:02:00.493 apps: 00:02:00.493 dumpcap: explicitly disabled via build config 00:02:00.493 graph: explicitly disabled via build config 00:02:00.493 pdump: explicitly disabled via build config 00:02:00.493 proc-info: explicitly disabled via build config 00:02:00.493 test-acl: explicitly disabled via build config 00:02:00.493 test-bbdev: explicitly disabled via build config 00:02:00.493 test-cmdline: explicitly disabled via build config 00:02:00.493 test-compress-perf: explicitly disabled via build config 00:02:00.493 test-crypto-perf: explicitly disabled via build config 00:02:00.493 test-dma-perf: explicitly disabled via build config 00:02:00.493 test-eventdev: explicitly disabled via build config 00:02:00.493 test-fib: explicitly disabled via build config 00:02:00.493 test-flow-perf: explicitly disabled via build config 00:02:00.493 test-gpudev: explicitly disabled via build config 00:02:00.493 test-mldev: explicitly disabled via build config 00:02:00.493 test-pipeline: explicitly disabled via build config 00:02:00.493 test-pmd: explicitly disabled via build config 00:02:00.493 test-regex: explicitly disabled via build config 00:02:00.493 test-sad: explicitly disabled via build config 00:02:00.493 test-security-perf: explicitly disabled via build config 00:02:00.493 00:02:00.493 libs: 00:02:00.493 argparse: explicitly disabled via build config 00:02:00.493 metrics: explicitly disabled via build config 00:02:00.493 acl: explicitly disabled via build config 00:02:00.493 bbdev: explicitly disabled via build config 00:02:00.493 
bitratestats: explicitly disabled via build config 00:02:00.493 bpf: explicitly disabled via build config 00:02:00.493 cfgfile: explicitly disabled via build config 00:02:00.493 distributor: explicitly disabled via build config 00:02:00.493 efd: explicitly disabled via build config 00:02:00.493 eventdev: explicitly disabled via build config 00:02:00.493 dispatcher: explicitly disabled via build config 00:02:00.493 gpudev: explicitly disabled via build config 00:02:00.493 gro: explicitly disabled via build config 00:02:00.493 gso: explicitly disabled via build config 00:02:00.493 ip_frag: explicitly disabled via build config 00:02:00.493 jobstats: explicitly disabled via build config 00:02:00.493 latencystats: explicitly disabled via build config 00:02:00.493 lpm: explicitly disabled via build config 00:02:00.493 member: explicitly disabled via build config 00:02:00.493 pcapng: explicitly disabled via build config 00:02:00.493 rawdev: explicitly disabled via build config 00:02:00.493 regexdev: explicitly disabled via build config 00:02:00.493 mldev: explicitly disabled via build config 00:02:00.493 rib: explicitly disabled via build config 00:02:00.493 sched: explicitly disabled via build config 00:02:00.493 stack: explicitly disabled via build config 00:02:00.493 ipsec: explicitly disabled via build config 00:02:00.493 pdcp: explicitly disabled via build config 00:02:00.493 fib: explicitly disabled via build config 00:02:00.493 port: explicitly disabled via build config 00:02:00.493 pdump: explicitly disabled via build config 00:02:00.493 table: explicitly disabled via build config 00:02:00.493 pipeline: explicitly disabled via build config 00:02:00.494 graph: explicitly disabled via build config 00:02:00.494 node: explicitly disabled via build config 00:02:00.494 00:02:00.494 drivers: 00:02:00.494 common/cpt: not in enabled drivers build config 00:02:00.494 common/dpaax: not in enabled drivers build config 00:02:00.494 common/iavf: not in enabled drivers build config 00:02:00.494 common/idpf: not in enabled drivers build config 00:02:00.494 common/ionic: not in enabled drivers build config 00:02:00.494 common/mvep: not in enabled drivers build config 00:02:00.494 common/octeontx: not in enabled drivers build config 00:02:00.494 bus/auxiliary: not in enabled drivers build config 00:02:00.494 bus/cdx: not in enabled drivers build config 00:02:00.494 bus/dpaa: not in enabled drivers build config 00:02:00.494 bus/fslmc: not in enabled drivers build config 00:02:00.494 bus/ifpga: not in enabled drivers build config 00:02:00.494 bus/platform: not in enabled drivers build config 00:02:00.494 bus/uacce: not in enabled drivers build config 00:02:00.494 bus/vmbus: not in enabled drivers build config 00:02:00.494 common/cnxk: not in enabled drivers build config 00:02:00.494 common/mlx5: not in enabled drivers build config 00:02:00.494 common/nfp: not in enabled drivers build config 00:02:00.494 common/nitrox: not in enabled drivers build config 00:02:00.494 common/qat: not in enabled drivers build config 00:02:00.494 common/sfc_efx: not in enabled drivers build config 00:02:00.494 mempool/bucket: not in enabled drivers build config 00:02:00.494 mempool/cnxk: not in enabled drivers build config 00:02:00.494 mempool/dpaa: not in enabled drivers build config 00:02:00.494 mempool/dpaa2: not in enabled drivers build config 00:02:00.494 mempool/octeontx: not in enabled drivers build config 00:02:00.494 mempool/stack: not in enabled drivers build config 00:02:00.494 dma/cnxk: not in enabled drivers build 
config 00:02:00.494 dma/dpaa: not in enabled drivers build config 00:02:00.494 dma/dpaa2: not in enabled drivers build config 00:02:00.494 dma/hisilicon: not in enabled drivers build config 00:02:00.494 dma/idxd: not in enabled drivers build config 00:02:00.494 dma/ioat: not in enabled drivers build config 00:02:00.494 dma/skeleton: not in enabled drivers build config 00:02:00.494 net/af_packet: not in enabled drivers build config 00:02:00.494 net/af_xdp: not in enabled drivers build config 00:02:00.494 net/ark: not in enabled drivers build config 00:02:00.494 net/atlantic: not in enabled drivers build config 00:02:00.494 net/avp: not in enabled drivers build config 00:02:00.494 net/axgbe: not in enabled drivers build config 00:02:00.494 net/bnx2x: not in enabled drivers build config 00:02:00.494 net/bnxt: not in enabled drivers build config 00:02:00.494 net/bonding: not in enabled drivers build config 00:02:00.494 net/cnxk: not in enabled drivers build config 00:02:00.494 net/cpfl: not in enabled drivers build config 00:02:00.494 net/cxgbe: not in enabled drivers build config 00:02:00.494 net/dpaa: not in enabled drivers build config 00:02:00.494 net/dpaa2: not in enabled drivers build config 00:02:00.494 net/e1000: not in enabled drivers build config 00:02:00.494 net/ena: not in enabled drivers build config 00:02:00.494 net/enetc: not in enabled drivers build config 00:02:00.494 net/enetfec: not in enabled drivers build config 00:02:00.494 net/enic: not in enabled drivers build config 00:02:00.494 net/failsafe: not in enabled drivers build config 00:02:00.494 net/fm10k: not in enabled drivers build config 00:02:00.494 net/gve: not in enabled drivers build config 00:02:00.494 net/hinic: not in enabled drivers build config 00:02:00.494 net/hns3: not in enabled drivers build config 00:02:00.494 net/i40e: not in enabled drivers build config 00:02:00.494 net/iavf: not in enabled drivers build config 00:02:00.494 net/ice: not in enabled drivers build config 00:02:00.494 net/idpf: not in enabled drivers build config 00:02:00.494 net/igc: not in enabled drivers build config 00:02:00.494 net/ionic: not in enabled drivers build config 00:02:00.494 net/ipn3ke: not in enabled drivers build config 00:02:00.494 net/ixgbe: not in enabled drivers build config 00:02:00.494 net/mana: not in enabled drivers build config 00:02:00.494 net/memif: not in enabled drivers build config 00:02:00.494 net/mlx4: not in enabled drivers build config 00:02:00.494 net/mlx5: not in enabled drivers build config 00:02:00.494 net/mvneta: not in enabled drivers build config 00:02:00.494 net/mvpp2: not in enabled drivers build config 00:02:00.494 net/netvsc: not in enabled drivers build config 00:02:00.494 net/nfb: not in enabled drivers build config 00:02:00.494 net/nfp: not in enabled drivers build config 00:02:00.494 net/ngbe: not in enabled drivers build config 00:02:00.494 net/null: not in enabled drivers build config 00:02:00.494 net/octeontx: not in enabled drivers build config 00:02:00.494 net/octeon_ep: not in enabled drivers build config 00:02:00.494 net/pcap: not in enabled drivers build config 00:02:00.494 net/pfe: not in enabled drivers build config 00:02:00.494 net/qede: not in enabled drivers build config 00:02:00.494 net/ring: not in enabled drivers build config 00:02:00.494 net/sfc: not in enabled drivers build config 00:02:00.494 net/softnic: not in enabled drivers build config 00:02:00.494 net/tap: not in enabled drivers build config 00:02:00.494 net/thunderx: not in enabled drivers build config 00:02:00.494 
net/txgbe: not in enabled drivers build config 00:02:00.494 net/vdev_netvsc: not in enabled drivers build config 00:02:00.494 net/vhost: not in enabled drivers build config 00:02:00.494 net/virtio: not in enabled drivers build config 00:02:00.494 net/vmxnet3: not in enabled drivers build config 00:02:00.494 raw/*: missing internal dependency, "rawdev" 00:02:00.494 crypto/armv8: not in enabled drivers build config 00:02:00.494 crypto/bcmfs: not in enabled drivers build config 00:02:00.494 crypto/caam_jr: not in enabled drivers build config 00:02:00.494 crypto/ccp: not in enabled drivers build config 00:02:00.494 crypto/cnxk: not in enabled drivers build config 00:02:00.494 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.494 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.494 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.494 crypto/mlx5: not in enabled drivers build config 00:02:00.494 crypto/mvsam: not in enabled drivers build config 00:02:00.494 crypto/nitrox: not in enabled drivers build config 00:02:00.494 crypto/null: not in enabled drivers build config 00:02:00.494 crypto/octeontx: not in enabled drivers build config 00:02:00.494 crypto/openssl: not in enabled drivers build config 00:02:00.494 crypto/scheduler: not in enabled drivers build config 00:02:00.494 crypto/uadk: not in enabled drivers build config 00:02:00.494 crypto/virtio: not in enabled drivers build config 00:02:00.494 compress/isal: not in enabled drivers build config 00:02:00.494 compress/mlx5: not in enabled drivers build config 00:02:00.494 compress/nitrox: not in enabled drivers build config 00:02:00.494 compress/octeontx: not in enabled drivers build config 00:02:00.494 compress/zlib: not in enabled drivers build config 00:02:00.494 regex/*: missing internal dependency, "regexdev" 00:02:00.494 ml/*: missing internal dependency, "mldev" 00:02:00.494 vdpa/ifc: not in enabled drivers build config 00:02:00.494 vdpa/mlx5: not in enabled drivers build config 00:02:00.494 vdpa/nfp: not in enabled drivers build config 00:02:00.494 vdpa/sfc: not in enabled drivers build config 00:02:00.494 event/*: missing internal dependency, "eventdev" 00:02:00.494 baseband/*: missing internal dependency, "bbdev" 00:02:00.494 gpu/*: missing internal dependency, "gpudev" 00:02:00.494 00:02:00.494 00:02:00.494 Build targets in project: 85 00:02:00.494 00:02:00.494 DPDK 24.03.0 00:02:00.494 00:02:00.494 User defined options 00:02:00.494 buildtype : debug 00:02:00.494 default_library : shared 00:02:00.494 libdir : lib 00:02:00.494 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:00.494 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:00.494 c_link_args : 00:02:00.494 cpu_instruction_set: native 00:02:00.494 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:00.494 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:00.494 enable_docs : false 00:02:00.494 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.494 enable_kmods : false 00:02:00.494 max_lcores : 128 00:02:00.494 tests : false 00:02:00.494 00:02:00.494 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.494 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:00.751 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.751 [2/268] Linking static target lib/librte_kvargs.a 00:02:00.751 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.751 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.751 [5/268] Linking static target lib/librte_log.a 00:02:00.751 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.009 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.266 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.266 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.522 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.522 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.522 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.522 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.522 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.522 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.522 [16/268] Linking static target lib/librte_telemetry.a 00:02:01.522 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.785 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.785 [19/268] Linking target lib/librte_log.so.24.1 00:02:01.785 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.051 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:02.051 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:02.051 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.309 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.309 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:02.309 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:02.309 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.568 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:02.568 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:02.568 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.568 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.568 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:02.568 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:02.827 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:02.827 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.827 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.827 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.085 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.086 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.086 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.343 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.343 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.343 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.343 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.600 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.600 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.600 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.600 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.600 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.858 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.858 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.117 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.117 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:04.117 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.375 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.375 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:04.375 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:04.633 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.633 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.633 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.633 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.633 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.891 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.891 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.149 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.149 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.149 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.406 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.407 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.407 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.663 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.663 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.663 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.920 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.920 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.920 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.920 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.179 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.179 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.179 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.437 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.696 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.696 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.696 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.696 [85/268] Linking static target lib/librte_eal.a 00:02:06.696 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.696 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.953 [88/268] Linking static target lib/librte_ring.a 00:02:06.953 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.953 [90/268] Linking static target lib/librte_rcu.a 00:02:06.953 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.953 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.953 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.210 [94/268] Linking static target lib/librte_mempool.a 00:02:07.210 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.468 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.468 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.468 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.468 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.468 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.468 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.468 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.468 [103/268] Linking static target lib/librte_mbuf.a 00:02:07.724 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.980 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.980 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.980 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.980 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.980 [109/268] Linking static target lib/librte_meter.a 00:02:08.237 [110/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:08.237 [111/268] Linking static target lib/librte_net.a 00:02:08.237 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.570 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.570 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.570 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.570 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.570 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.853 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.853 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:09.134 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.134 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.391 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.648 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.649 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.649 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.649 [126/268] Linking static target lib/librte_pci.a 00:02:09.649 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.907 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.907 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.907 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.907 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.907 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.907 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.907 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.907 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.907 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.907 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.907 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:10.174 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.174 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.174 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:10.174 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.174 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:10.174 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.456 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.456 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.456 [147/268] Linking static target lib/librte_cmdline.a 00:02:10.456 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.737 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:10.737 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:10.737 [151/268] Linking static target lib/librte_ethdev.a 00:02:11.013 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:11.013 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:11.013 [154/268] Linking static target lib/librte_timer.a 00:02:11.013 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.013 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.013 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.013 [158/268] Linking static target lib/librte_hash.a 00:02:11.013 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.599 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.599 
[161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.599 [162/268] Linking static target lib/librte_compressdev.a 00:02:11.599 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.599 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.861 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.861 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.861 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:12.120 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.120 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.120 [170/268] Linking static target lib/librte_dmadev.a 00:02:12.120 [171/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.120 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:12.121 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.378 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.378 [175/268] Linking static target lib/librte_cryptodev.a 00:02:12.378 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.635 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.635 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.635 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.892 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.892 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.892 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.892 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.892 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.148 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.148 [186/268] Linking static target lib/librte_power.a 00:02:13.412 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.412 [188/268] Linking static target lib/librte_reorder.a 00:02:13.412 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.412 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.412 [191/268] Linking static target lib/librte_security.a 00:02:13.412 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.672 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.672 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.672 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.929 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.186 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.186 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.186 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:14.508 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.508 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.508 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.783 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.783 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.783 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.783 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.783 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:14.783 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.040 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.040 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.040 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.040 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.040 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.040 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.040 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.040 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.040 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.040 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.040 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.298 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:15.298 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.298 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.298 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.557 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.557 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.557 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.557 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.557 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.494 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:16.494 [230/268] Linking static target lib/librte_vhost.a 00:02:17.060 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.318 [232/268] Linking target lib/librte_eal.so.24.1 00:02:17.318 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.318 [234/268] Linking target lib/librte_pci.so.24.1 00:02:17.318 [235/268] Linking target lib/librte_ring.so.24.1 00:02:17.318 [236/268] Linking target lib/librte_timer.so.24.1 00:02:17.318 [237/268] Linking target lib/librte_meter.so.24.1 00:02:17.576 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:17.576 [239/268] Linking target 
lib/librte_dmadev.so.24.1 00:02:17.576 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.576 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.576 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.576 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.576 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.576 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:17.576 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.576 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:17.835 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.836 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:17.836 [250/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.836 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:17.836 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:17.836 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.093 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.093 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:18.093 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:18.093 [257/268] Linking target lib/librte_net.so.24.1 00:02:18.093 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:18.093 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:18.093 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:18.351 [261/268] Linking target lib/librte_hash.so.24.1 00:02:18.351 [262/268] Linking target lib/librte_security.so.24.1 00:02:18.351 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:18.351 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:18.351 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.351 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.610 [267/268] Linking target lib/librte_power.so.24.1 00:02:18.610 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:18.610 INFO: autodetecting backend as ninja 00:02:18.610 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:19.541 CC lib/log/log.o 00:02:19.541 CC lib/log/log_deprecated.o 00:02:19.541 CC lib/log/log_flags.o 00:02:19.541 CC lib/ut_mock/mock.o 00:02:19.541 CC lib/ut/ut.o 00:02:19.799 LIB libspdk_ut.a 00:02:19.799 LIB libspdk_ut_mock.a 00:02:19.799 LIB libspdk_log.a 00:02:19.799 SO libspdk_ut.so.2.0 00:02:19.799 SO libspdk_ut_mock.so.6.0 00:02:19.799 SO libspdk_log.so.7.0 00:02:20.057 SYMLINK libspdk_ut.so 00:02:20.057 SYMLINK libspdk_ut_mock.so 00:02:20.057 SYMLINK libspdk_log.so 00:02:20.057 CXX lib/trace_parser/trace.o 00:02:20.327 CC lib/dma/dma.o 00:02:20.327 CC lib/ioat/ioat.o 00:02:20.327 CC lib/util/base64.o 00:02:20.327 CC lib/util/bit_array.o 00:02:20.327 CC lib/util/cpuset.o 00:02:20.327 CC lib/util/crc32.o 00:02:20.327 CC lib/util/crc32c.o 00:02:20.327 CC lib/util/crc16.o 00:02:20.327 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.327 CC lib/util/crc32_ieee.o 00:02:20.327 CC 
lib/vfio_user/host/vfio_user.o 00:02:20.327 LIB libspdk_dma.a 00:02:20.327 CC lib/util/crc64.o 00:02:20.327 CC lib/util/dif.o 00:02:20.327 SO libspdk_dma.so.4.0 00:02:20.588 CC lib/util/fd.o 00:02:20.588 SYMLINK libspdk_dma.so 00:02:20.588 CC lib/util/file.o 00:02:20.588 CC lib/util/hexlify.o 00:02:20.588 CC lib/util/iov.o 00:02:20.588 LIB libspdk_ioat.a 00:02:20.588 CC lib/util/math.o 00:02:20.588 SO libspdk_ioat.so.7.0 00:02:20.588 LIB libspdk_vfio_user.a 00:02:20.588 CC lib/util/pipe.o 00:02:20.588 CC lib/util/strerror_tls.o 00:02:20.588 CC lib/util/string.o 00:02:20.588 SO libspdk_vfio_user.so.5.0 00:02:20.588 SYMLINK libspdk_ioat.so 00:02:20.588 CC lib/util/uuid.o 00:02:20.588 CC lib/util/fd_group.o 00:02:20.588 CC lib/util/xor.o 00:02:20.588 SYMLINK libspdk_vfio_user.so 00:02:20.588 CC lib/util/zipf.o 00:02:20.846 LIB libspdk_util.a 00:02:21.104 SO libspdk_util.so.9.1 00:02:21.104 LIB libspdk_trace_parser.a 00:02:21.104 SYMLINK libspdk_util.so 00:02:21.362 SO libspdk_trace_parser.so.5.0 00:02:21.362 SYMLINK libspdk_trace_parser.so 00:02:21.362 CC lib/idxd/idxd.o 00:02:21.362 CC lib/idxd/idxd_user.o 00:02:21.362 CC lib/json/json_parse.o 00:02:21.362 CC lib/json/json_util.o 00:02:21.362 CC lib/conf/conf.o 00:02:21.362 CC lib/idxd/idxd_kernel.o 00:02:21.362 CC lib/rdma_provider/common.o 00:02:21.362 CC lib/vmd/vmd.o 00:02:21.362 CC lib/rdma_utils/rdma_utils.o 00:02:21.362 CC lib/env_dpdk/env.o 00:02:21.621 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:21.621 CC lib/vmd/led.o 00:02:21.621 LIB libspdk_conf.a 00:02:21.621 CC lib/json/json_write.o 00:02:21.621 CC lib/env_dpdk/memory.o 00:02:21.621 CC lib/env_dpdk/pci.o 00:02:21.621 SO libspdk_conf.so.6.0 00:02:21.621 LIB libspdk_rdma_utils.a 00:02:21.621 SO libspdk_rdma_utils.so.1.0 00:02:21.621 SYMLINK libspdk_conf.so 00:02:21.621 CC lib/env_dpdk/init.o 00:02:21.621 CC lib/env_dpdk/threads.o 00:02:21.621 SYMLINK libspdk_rdma_utils.so 00:02:21.621 LIB libspdk_rdma_provider.a 00:02:21.621 CC lib/env_dpdk/pci_ioat.o 00:02:21.879 SO libspdk_rdma_provider.so.6.0 00:02:21.879 SYMLINK libspdk_rdma_provider.so 00:02:21.879 CC lib/env_dpdk/pci_virtio.o 00:02:21.879 CC lib/env_dpdk/pci_vmd.o 00:02:21.879 CC lib/env_dpdk/pci_idxd.o 00:02:21.879 LIB libspdk_json.a 00:02:21.879 LIB libspdk_idxd.a 00:02:21.879 SO libspdk_json.so.6.0 00:02:21.879 SO libspdk_idxd.so.12.0 00:02:21.879 CC lib/env_dpdk/pci_event.o 00:02:21.879 CC lib/env_dpdk/sigbus_handler.o 00:02:21.879 LIB libspdk_vmd.a 00:02:21.879 CC lib/env_dpdk/pci_dpdk.o 00:02:21.879 SYMLINK libspdk_json.so 00:02:22.136 SYMLINK libspdk_idxd.so 00:02:22.136 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:22.136 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:22.136 SO libspdk_vmd.so.6.0 00:02:22.136 SYMLINK libspdk_vmd.so 00:02:22.136 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:22.136 CC lib/jsonrpc/jsonrpc_server.o 00:02:22.136 CC lib/jsonrpc/jsonrpc_client.o 00:02:22.136 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:22.394 LIB libspdk_jsonrpc.a 00:02:22.652 SO libspdk_jsonrpc.so.6.0 00:02:22.652 SYMLINK libspdk_jsonrpc.so 00:02:22.652 LIB libspdk_env_dpdk.a 00:02:22.910 SO libspdk_env_dpdk.so.14.1 00:02:22.910 CC lib/rpc/rpc.o 00:02:22.910 SYMLINK libspdk_env_dpdk.so 00:02:23.168 LIB libspdk_rpc.a 00:02:23.168 SO libspdk_rpc.so.6.0 00:02:23.168 SYMLINK libspdk_rpc.so 00:02:23.426 CC lib/notify/notify.o 00:02:23.426 CC lib/notify/notify_rpc.o 00:02:23.426 CC lib/trace/trace.o 00:02:23.426 CC lib/trace/trace_rpc.o 00:02:23.426 CC lib/trace/trace_flags.o 00:02:23.426 CC lib/keyring/keyring.o 00:02:23.426 CC 
lib/keyring/keyring_rpc.o 00:02:23.684 LIB libspdk_notify.a 00:02:23.684 LIB libspdk_keyring.a 00:02:23.684 LIB libspdk_trace.a 00:02:23.684 SO libspdk_notify.so.6.0 00:02:23.684 SO libspdk_keyring.so.1.0 00:02:23.684 SO libspdk_trace.so.10.0 00:02:23.684 SYMLINK libspdk_notify.so 00:02:23.684 SYMLINK libspdk_keyring.so 00:02:23.684 SYMLINK libspdk_trace.so 00:02:23.941 CC lib/thread/thread.o 00:02:23.942 CC lib/thread/iobuf.o 00:02:23.942 CC lib/sock/sock_rpc.o 00:02:23.942 CC lib/sock/sock.o 00:02:24.507 LIB libspdk_sock.a 00:02:24.507 SO libspdk_sock.so.10.0 00:02:24.507 SYMLINK libspdk_sock.so 00:02:24.766 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.766 CC lib/nvme/nvme_ctrlr.o 00:02:24.766 CC lib/nvme/nvme_fabric.o 00:02:24.766 CC lib/nvme/nvme_ns_cmd.o 00:02:24.766 CC lib/nvme/nvme_ns.o 00:02:24.766 CC lib/nvme/nvme_pcie_common.o 00:02:24.766 CC lib/nvme/nvme_pcie.o 00:02:24.766 CC lib/nvme/nvme.o 00:02:24.766 CC lib/nvme/nvme_qpair.o 00:02:25.703 LIB libspdk_thread.a 00:02:25.703 CC lib/nvme/nvme_quirks.o 00:02:25.703 CC lib/nvme/nvme_transport.o 00:02:25.703 SO libspdk_thread.so.10.1 00:02:25.703 CC lib/nvme/nvme_discovery.o 00:02:25.703 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.703 SYMLINK libspdk_thread.so 00:02:25.703 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.703 CC lib/nvme/nvme_tcp.o 00:02:25.703 CC lib/nvme/nvme_opal.o 00:02:25.703 CC lib/nvme/nvme_io_msg.o 00:02:26.047 CC lib/nvme/nvme_poll_group.o 00:02:26.305 CC lib/nvme/nvme_zns.o 00:02:26.305 CC lib/nvme/nvme_stubs.o 00:02:26.305 CC lib/nvme/nvme_auth.o 00:02:26.305 CC lib/nvme/nvme_cuse.o 00:02:26.305 CC lib/nvme/nvme_rdma.o 00:02:26.563 CC lib/accel/accel.o 00:02:26.563 CC lib/blob/blobstore.o 00:02:26.563 CC lib/blob/request.o 00:02:26.821 CC lib/accel/accel_rpc.o 00:02:27.081 CC lib/accel/accel_sw.o 00:02:27.081 CC lib/init/json_config.o 00:02:27.081 CC lib/virtio/virtio.o 00:02:27.081 CC lib/virtio/virtio_vhost_user.o 00:02:27.081 CC lib/blob/zeroes.o 00:02:27.081 CC lib/init/subsystem.o 00:02:27.341 CC lib/init/subsystem_rpc.o 00:02:27.341 CC lib/virtio/virtio_vfio_user.o 00:02:27.341 CC lib/virtio/virtio_pci.o 00:02:27.341 CC lib/blob/blob_bs_dev.o 00:02:27.341 CC lib/init/rpc.o 00:02:27.601 LIB libspdk_accel.a 00:02:27.601 LIB libspdk_init.a 00:02:27.601 SO libspdk_accel.so.15.1 00:02:27.601 LIB libspdk_virtio.a 00:02:27.601 SO libspdk_init.so.5.0 00:02:27.601 LIB libspdk_nvme.a 00:02:27.601 SYMLINK libspdk_accel.so 00:02:27.601 SO libspdk_virtio.so.7.0 00:02:27.601 SYMLINK libspdk_init.so 00:02:27.857 SYMLINK libspdk_virtio.so 00:02:27.857 SO libspdk_nvme.so.13.1 00:02:27.857 CC lib/bdev/bdev.o 00:02:27.857 CC lib/bdev/bdev_rpc.o 00:02:27.857 CC lib/bdev/part.o 00:02:27.857 CC lib/bdev/scsi_nvme.o 00:02:27.857 CC lib/bdev/bdev_zone.o 00:02:27.857 CC lib/event/app.o 00:02:27.857 CC lib/event/log_rpc.o 00:02:27.857 CC lib/event/reactor.o 00:02:28.115 CC lib/event/app_rpc.o 00:02:28.115 CC lib/event/scheduler_static.o 00:02:28.115 SYMLINK libspdk_nvme.so 00:02:28.372 LIB libspdk_event.a 00:02:28.372 SO libspdk_event.so.14.0 00:02:28.630 SYMLINK libspdk_event.so 00:02:29.563 LIB libspdk_blob.a 00:02:29.563 SO libspdk_blob.so.11.0 00:02:29.821 SYMLINK libspdk_blob.so 00:02:30.080 CC lib/lvol/lvol.o 00:02:30.080 CC lib/blobfs/blobfs.o 00:02:30.080 CC lib/blobfs/tree.o 00:02:30.645 LIB libspdk_bdev.a 00:02:30.645 SO libspdk_bdev.so.15.1 00:02:30.903 LIB libspdk_blobfs.a 00:02:30.903 SYMLINK libspdk_bdev.so 00:02:30.903 SO libspdk_blobfs.so.10.0 00:02:30.903 LIB libspdk_lvol.a 00:02:30.903 SO libspdk_lvol.so.10.0 
00:02:30.903 SYMLINK libspdk_blobfs.so 00:02:31.161 SYMLINK libspdk_lvol.so 00:02:31.161 CC lib/nvmf/ctrlr.o 00:02:31.161 CC lib/nvmf/ctrlr_discovery.o 00:02:31.161 CC lib/nvmf/ctrlr_bdev.o 00:02:31.161 CC lib/nvmf/subsystem.o 00:02:31.161 CC lib/nvmf/nvmf.o 00:02:31.161 CC lib/nbd/nbd.o 00:02:31.161 CC lib/nbd/nbd_rpc.o 00:02:31.161 CC lib/ublk/ublk.o 00:02:31.161 CC lib/ftl/ftl_core.o 00:02:31.161 CC lib/scsi/dev.o 00:02:31.161 CC lib/ftl/ftl_init.o 00:02:31.418 CC lib/scsi/lun.o 00:02:31.418 CC lib/scsi/port.o 00:02:31.418 CC lib/ftl/ftl_layout.o 00:02:31.675 LIB libspdk_nbd.a 00:02:31.675 CC lib/ftl/ftl_debug.o 00:02:31.675 SO libspdk_nbd.so.7.0 00:02:31.675 CC lib/ftl/ftl_io.o 00:02:31.675 SYMLINK libspdk_nbd.so 00:02:31.675 CC lib/ftl/ftl_sb.o 00:02:31.675 CC lib/scsi/scsi.o 00:02:31.675 CC lib/scsi/scsi_bdev.o 00:02:31.675 CC lib/ublk/ublk_rpc.o 00:02:31.933 CC lib/scsi/scsi_pr.o 00:02:31.933 CC lib/scsi/scsi_rpc.o 00:02:31.933 CC lib/ftl/ftl_l2p.o 00:02:31.933 CC lib/scsi/task.o 00:02:31.933 CC lib/ftl/ftl_l2p_flat.o 00:02:31.933 LIB libspdk_ublk.a 00:02:31.933 SO libspdk_ublk.so.3.0 00:02:31.933 CC lib/ftl/ftl_nv_cache.o 00:02:31.933 CC lib/ftl/ftl_band.o 00:02:31.933 SYMLINK libspdk_ublk.so 00:02:31.933 CC lib/ftl/ftl_band_ops.o 00:02:31.933 CC lib/ftl/ftl_writer.o 00:02:32.190 CC lib/ftl/ftl_rq.o 00:02:32.190 CC lib/nvmf/nvmf_rpc.o 00:02:32.190 CC lib/nvmf/transport.o 00:02:32.190 LIB libspdk_scsi.a 00:02:32.190 SO libspdk_scsi.so.9.0 00:02:32.190 CC lib/ftl/ftl_reloc.o 00:02:32.190 CC lib/nvmf/tcp.o 00:02:32.447 CC lib/nvmf/stubs.o 00:02:32.447 CC lib/nvmf/mdns_server.o 00:02:32.447 SYMLINK libspdk_scsi.so 00:02:32.447 CC lib/nvmf/rdma.o 00:02:32.447 CC lib/nvmf/auth.o 00:02:32.757 CC lib/ftl/ftl_p2l.o 00:02:32.757 CC lib/ftl/ftl_l2p_cache.o 00:02:32.757 CC lib/iscsi/conn.o 00:02:32.757 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.015 CC lib/vhost/vhost.o 00:02:33.015 CC lib/iscsi/init_grp.o 00:02:33.015 CC lib/vhost/vhost_rpc.o 00:02:33.015 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.015 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.273 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.273 CC lib/iscsi/iscsi.o 00:02:33.273 CC lib/iscsi/md5.o 00:02:33.273 CC lib/iscsi/param.o 00:02:33.273 CC lib/iscsi/portal_grp.o 00:02:33.273 CC lib/iscsi/tgt_node.o 00:02:33.273 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.530 CC lib/iscsi/iscsi_subsystem.o 00:02:33.531 CC lib/vhost/vhost_scsi.o 00:02:33.531 CC lib/vhost/vhost_blk.o 00:02:33.531 CC lib/vhost/rte_vhost_user.o 00:02:33.788 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.788 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.788 CC lib/iscsi/iscsi_rpc.o 00:02:33.788 CC lib/iscsi/task.o 00:02:34.046 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:34.046 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:34.046 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:34.046 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:34.046 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:34.046 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:34.046 CC lib/ftl/utils/ftl_conf.o 00:02:34.304 CC lib/ftl/utils/ftl_md.o 00:02:34.304 CC lib/ftl/utils/ftl_mempool.o 00:02:34.304 LIB libspdk_nvmf.a 00:02:34.304 CC lib/ftl/utils/ftl_bitmap.o 00:02:34.304 CC lib/ftl/utils/ftl_property.o 00:02:34.563 SO libspdk_nvmf.so.18.1 00:02:34.563 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:34.563 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:34.563 LIB libspdk_iscsi.a 00:02:34.563 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:34.563 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:34.563 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:34.563 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:34.563 SO libspdk_iscsi.so.8.0 00:02:34.821 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:34.821 SYMLINK libspdk_nvmf.so 00:02:34.821 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:34.821 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:34.821 LIB libspdk_vhost.a 00:02:34.821 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:34.821 SO libspdk_vhost.so.8.0 00:02:34.821 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:34.821 CC lib/ftl/base/ftl_base_dev.o 00:02:34.821 SYMLINK libspdk_iscsi.so 00:02:34.821 CC lib/ftl/base/ftl_base_bdev.o 00:02:34.821 CC lib/ftl/ftl_trace.o 00:02:34.821 SYMLINK libspdk_vhost.so 00:02:35.080 LIB libspdk_ftl.a 00:02:35.338 SO libspdk_ftl.so.9.0 00:02:35.971 SYMLINK libspdk_ftl.so 00:02:36.230 CC module/env_dpdk/env_dpdk_rpc.o 00:02:36.230 CC module/keyring/file/keyring.o 00:02:36.230 CC module/blob/bdev/blob_bdev.o 00:02:36.230 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:36.230 CC module/accel/ioat/accel_ioat.o 00:02:36.230 CC module/keyring/linux/keyring.o 00:02:36.230 CC module/sock/posix/posix.o 00:02:36.230 CC module/accel/iaa/accel_iaa.o 00:02:36.230 CC module/accel/error/accel_error.o 00:02:36.230 CC module/accel/dsa/accel_dsa.o 00:02:36.230 LIB libspdk_env_dpdk_rpc.a 00:02:36.230 SO libspdk_env_dpdk_rpc.so.6.0 00:02:36.230 CC module/keyring/file/keyring_rpc.o 00:02:36.488 CC module/keyring/linux/keyring_rpc.o 00:02:36.488 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.488 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.488 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.488 CC module/accel/error/accel_error_rpc.o 00:02:36.488 LIB libspdk_scheduler_dynamic.a 00:02:36.488 LIB libspdk_blob_bdev.a 00:02:36.488 SO libspdk_scheduler_dynamic.so.4.0 00:02:36.488 LIB libspdk_keyring_file.a 00:02:36.488 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.488 SO libspdk_blob_bdev.so.11.0 00:02:36.488 LIB libspdk_keyring_linux.a 00:02:36.488 LIB libspdk_accel_iaa.a 00:02:36.488 SO libspdk_keyring_file.so.1.0 00:02:36.488 LIB libspdk_accel_ioat.a 00:02:36.488 SO libspdk_keyring_linux.so.1.0 00:02:36.488 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.488 LIB libspdk_accel_error.a 00:02:36.488 SO libspdk_accel_iaa.so.3.0 00:02:36.488 SO libspdk_accel_ioat.so.6.0 00:02:36.488 SYMLINK libspdk_blob_bdev.so 00:02:36.488 SO libspdk_accel_error.so.2.0 00:02:36.489 SYMLINK libspdk_keyring_file.so 00:02:36.489 SYMLINK libspdk_keyring_linux.so 00:02:36.747 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.747 SYMLINK libspdk_accel_ioat.so 00:02:36.747 SYMLINK libspdk_accel_iaa.so 00:02:36.747 SYMLINK libspdk_accel_error.so 00:02:36.747 LIB libspdk_accel_dsa.a 00:02:36.747 SO libspdk_accel_dsa.so.5.0 00:02:36.747 CC module/scheduler/gscheduler/gscheduler.o 00:02:36.747 SYMLINK libspdk_accel_dsa.so 00:02:36.747 CC module/sock/uring/uring.o 00:02:36.747 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.747 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:36.747 CC module/bdev/delay/vbdev_delay.o 00:02:36.747 CC module/bdev/lvol/vbdev_lvol.o 00:02:37.006 CC module/blobfs/bdev/blobfs_bdev.o 00:02:37.006 CC module/bdev/error/vbdev_error.o 00:02:37.006 LIB libspdk_scheduler_gscheduler.a 00:02:37.006 CC module/bdev/gpt/gpt.o 00:02:37.006 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:37.006 CC module/bdev/error/vbdev_error_rpc.o 00:02:37.006 SO libspdk_scheduler_gscheduler.so.4.0 00:02:37.006 CC module/bdev/malloc/bdev_malloc.o 00:02:37.006 LIB libspdk_sock_posix.a 00:02:37.006 SO libspdk_sock_posix.so.6.0 00:02:37.006 SYMLINK libspdk_scheduler_gscheduler.so 00:02:37.006 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:02:37.006 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:37.006 CC module/bdev/gpt/vbdev_gpt.o 00:02:37.006 SYMLINK libspdk_sock_posix.so 00:02:37.006 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:37.268 LIB libspdk_bdev_error.a 00:02:37.268 SO libspdk_bdev_error.so.6.0 00:02:37.268 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:37.268 LIB libspdk_blobfs_bdev.a 00:02:37.268 SYMLINK libspdk_bdev_error.so 00:02:37.268 SO libspdk_blobfs_bdev.so.6.0 00:02:37.268 LIB libspdk_bdev_delay.a 00:02:37.268 CC module/bdev/null/bdev_null.o 00:02:37.268 SO libspdk_bdev_delay.so.6.0 00:02:37.268 SYMLINK libspdk_blobfs_bdev.so 00:02:37.528 LIB libspdk_bdev_lvol.a 00:02:37.528 LIB libspdk_bdev_gpt.a 00:02:37.528 SO libspdk_bdev_lvol.so.6.0 00:02:37.528 LIB libspdk_bdev_malloc.a 00:02:37.528 SO libspdk_bdev_gpt.so.6.0 00:02:37.528 SYMLINK libspdk_bdev_delay.so 00:02:37.528 CC module/bdev/null/bdev_null_rpc.o 00:02:37.528 CC module/bdev/nvme/bdev_nvme.o 00:02:37.528 SO libspdk_bdev_malloc.so.6.0 00:02:37.528 LIB libspdk_sock_uring.a 00:02:37.528 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.528 SO libspdk_sock_uring.so.5.0 00:02:37.528 SYMLINK libspdk_bdev_gpt.so 00:02:37.528 SYMLINK libspdk_bdev_lvol.so 00:02:37.528 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.528 SYMLINK libspdk_bdev_malloc.so 00:02:37.528 CC module/bdev/raid/bdev_raid.o 00:02:37.528 CC module/bdev/split/vbdev_split.o 00:02:37.528 SYMLINK libspdk_sock_uring.so 00:02:37.528 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.528 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.528 LIB libspdk_bdev_null.a 00:02:37.786 SO libspdk_bdev_null.so.6.0 00:02:37.786 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.786 CC module/bdev/raid/raid0.o 00:02:37.787 CC module/bdev/uring/bdev_uring.o 00:02:37.787 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.787 SYMLINK libspdk_bdev_null.so 00:02:37.787 LIB libspdk_bdev_split.a 00:02:37.787 LIB libspdk_bdev_passthru.a 00:02:37.787 SO libspdk_bdev_passthru.so.6.0 00:02:37.787 SO libspdk_bdev_split.so.6.0 00:02:38.045 SYMLINK libspdk_bdev_split.so 00:02:38.045 SYMLINK libspdk_bdev_passthru.so 00:02:38.045 CC module/bdev/uring/bdev_uring_rpc.o 00:02:38.045 CC module/bdev/raid/raid1.o 00:02:38.045 CC module/bdev/aio/bdev_aio.o 00:02:38.045 CC module/bdev/raid/concat.o 00:02:38.045 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:38.045 CC module/bdev/ftl/bdev_ftl.o 00:02:38.045 CC module/bdev/aio/bdev_aio_rpc.o 00:02:38.045 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:38.045 LIB libspdk_bdev_uring.a 00:02:38.045 SO libspdk_bdev_uring.so.6.0 00:02:38.304 SYMLINK libspdk_bdev_uring.so 00:02:38.304 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:38.304 LIB libspdk_bdev_zone_block.a 00:02:38.304 CC module/bdev/nvme/nvme_rpc.o 00:02:38.304 SO libspdk_bdev_zone_block.so.6.0 00:02:38.304 CC module/bdev/nvme/bdev_mdns_client.o 00:02:38.304 LIB libspdk_bdev_aio.a 00:02:38.304 SYMLINK libspdk_bdev_zone_block.so 00:02:38.304 CC module/bdev/iscsi/bdev_iscsi.o 00:02:38.304 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:38.304 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:38.304 SO libspdk_bdev_aio.so.6.0 00:02:38.563 LIB libspdk_bdev_ftl.a 00:02:38.563 SYMLINK libspdk_bdev_aio.so 00:02:38.563 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:38.563 LIB libspdk_bdev_raid.a 00:02:38.563 SO libspdk_bdev_ftl.so.6.0 00:02:38.563 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:38.563 CC module/bdev/nvme/vbdev_opal.o 00:02:38.563 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:38.563 SO 
libspdk_bdev_raid.so.6.0 00:02:38.563 SYMLINK libspdk_bdev_ftl.so 00:02:38.563 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:38.563 SYMLINK libspdk_bdev_raid.so 00:02:38.822 LIB libspdk_bdev_iscsi.a 00:02:38.822 SO libspdk_bdev_iscsi.so.6.0 00:02:38.822 SYMLINK libspdk_bdev_iscsi.so 00:02:38.822 LIB libspdk_bdev_virtio.a 00:02:38.822 SO libspdk_bdev_virtio.so.6.0 00:02:39.081 SYMLINK libspdk_bdev_virtio.so 00:02:39.647 LIB libspdk_bdev_nvme.a 00:02:39.647 SO libspdk_bdev_nvme.so.7.0 00:02:39.905 SYMLINK libspdk_bdev_nvme.so 00:02:40.473 CC module/event/subsystems/iobuf/iobuf.o 00:02:40.473 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:40.473 CC module/event/subsystems/scheduler/scheduler.o 00:02:40.473 CC module/event/subsystems/keyring/keyring.o 00:02:40.473 CC module/event/subsystems/vmd/vmd.o 00:02:40.473 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:40.473 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:40.473 CC module/event/subsystems/sock/sock.o 00:02:40.473 LIB libspdk_event_vhost_blk.a 00:02:40.473 LIB libspdk_event_keyring.a 00:02:40.473 LIB libspdk_event_scheduler.a 00:02:40.473 LIB libspdk_event_vmd.a 00:02:40.473 LIB libspdk_event_iobuf.a 00:02:40.473 LIB libspdk_event_sock.a 00:02:40.473 SO libspdk_event_vhost_blk.so.3.0 00:02:40.473 SO libspdk_event_keyring.so.1.0 00:02:40.473 SO libspdk_event_scheduler.so.4.0 00:02:40.473 SO libspdk_event_vmd.so.6.0 00:02:40.473 SO libspdk_event_iobuf.so.3.0 00:02:40.473 SO libspdk_event_sock.so.5.0 00:02:40.473 SYMLINK libspdk_event_keyring.so 00:02:40.473 SYMLINK libspdk_event_vhost_blk.so 00:02:40.473 SYMLINK libspdk_event_scheduler.so 00:02:40.733 SYMLINK libspdk_event_sock.so 00:02:40.733 SYMLINK libspdk_event_vmd.so 00:02:40.733 SYMLINK libspdk_event_iobuf.so 00:02:40.990 CC module/event/subsystems/accel/accel.o 00:02:40.990 LIB libspdk_event_accel.a 00:02:40.990 SO libspdk_event_accel.so.6.0 00:02:41.249 SYMLINK libspdk_event_accel.so 00:02:41.508 CC module/event/subsystems/bdev/bdev.o 00:02:41.767 LIB libspdk_event_bdev.a 00:02:41.767 SO libspdk_event_bdev.so.6.0 00:02:41.767 SYMLINK libspdk_event_bdev.so 00:02:42.027 CC module/event/subsystems/ublk/ublk.o 00:02:42.027 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:42.027 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:42.027 CC module/event/subsystems/scsi/scsi.o 00:02:42.027 CC module/event/subsystems/nbd/nbd.o 00:02:42.027 LIB libspdk_event_ublk.a 00:02:42.286 LIB libspdk_event_scsi.a 00:02:42.286 SO libspdk_event_ublk.so.3.0 00:02:42.286 SO libspdk_event_scsi.so.6.0 00:02:42.286 LIB libspdk_event_nbd.a 00:02:42.286 SYMLINK libspdk_event_ublk.so 00:02:42.286 SO libspdk_event_nbd.so.6.0 00:02:42.286 SYMLINK libspdk_event_scsi.so 00:02:42.286 LIB libspdk_event_nvmf.a 00:02:42.286 SYMLINK libspdk_event_nbd.so 00:02:42.286 SO libspdk_event_nvmf.so.6.0 00:02:42.545 SYMLINK libspdk_event_nvmf.so 00:02:42.545 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.545 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.805 LIB libspdk_event_vhost_scsi.a 00:02:42.805 LIB libspdk_event_iscsi.a 00:02:42.805 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.805 SO libspdk_event_iscsi.so.6.0 00:02:42.805 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.805 SYMLINK libspdk_event_iscsi.so 00:02:43.065 SO libspdk.so.6.0 00:02:43.065 SYMLINK libspdk.so 00:02:43.324 CC app/trace_record/trace_record.o 00:02:43.324 CC app/spdk_lspci/spdk_lspci.o 00:02:43.324 CXX app/trace/trace.o 00:02:43.324 CC app/nvmf_tgt/nvmf_main.o 00:02:43.324 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:43.324 CC 
app/iscsi_tgt/iscsi_tgt.o 00:02:43.324 CC app/spdk_tgt/spdk_tgt.o 00:02:43.324 CC examples/util/zipf/zipf.o 00:02:43.324 CC examples/ioat/perf/perf.o 00:02:43.324 CC test/thread/poller_perf/poller_perf.o 00:02:43.324 LINK spdk_lspci 00:02:43.583 LINK nvmf_tgt 00:02:43.583 LINK interrupt_tgt 00:02:43.583 LINK zipf 00:02:43.583 LINK spdk_trace_record 00:02:43.583 LINK poller_perf 00:02:43.583 LINK iscsi_tgt 00:02:43.583 LINK spdk_tgt 00:02:43.583 LINK ioat_perf 00:02:43.583 CC examples/ioat/verify/verify.o 00:02:43.842 LINK spdk_trace 00:02:43.842 CC app/spdk_nvme_perf/perf.o 00:02:43.842 CC app/spdk_nvme_identify/identify.o 00:02:43.842 LINK verify 00:02:43.842 CC app/spdk_nvme_discover/discovery_aer.o 00:02:43.842 CC app/spdk_top/spdk_top.o 00:02:43.842 CC examples/thread/thread/thread_ex.o 00:02:43.842 CC test/dma/test_dma/test_dma.o 00:02:44.100 CC test/app/bdev_svc/bdev_svc.o 00:02:44.100 CC app/spdk_dd/spdk_dd.o 00:02:44.100 CC app/fio/nvme/fio_plugin.o 00:02:44.100 LINK spdk_nvme_discover 00:02:44.100 CC app/vhost/vhost.o 00:02:44.100 LINK bdev_svc 00:02:44.359 LINK thread 00:02:44.359 LINK vhost 00:02:44.359 LINK test_dma 00:02:44.359 CC app/fio/bdev/fio_plugin.o 00:02:44.359 LINK spdk_dd 00:02:44.618 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:44.618 LINK spdk_nvme_identify 00:02:44.618 CC examples/sock/hello_world/hello_sock.o 00:02:44.618 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:44.618 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.618 LINK spdk_nvme 00:02:44.618 LINK spdk_nvme_perf 00:02:44.618 LINK spdk_top 00:02:44.876 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.876 CC examples/vmd/lsvmd/lsvmd.o 00:02:44.876 TEST_HEADER include/spdk/accel.h 00:02:44.876 TEST_HEADER include/spdk/accel_module.h 00:02:44.876 TEST_HEADER include/spdk/assert.h 00:02:44.876 LINK hello_sock 00:02:44.876 TEST_HEADER include/spdk/barrier.h 00:02:44.876 TEST_HEADER include/spdk/base64.h 00:02:44.876 TEST_HEADER include/spdk/bdev.h 00:02:44.876 TEST_HEADER include/spdk/bdev_module.h 00:02:44.876 TEST_HEADER include/spdk/bdev_zone.h 00:02:44.876 TEST_HEADER include/spdk/bit_array.h 00:02:44.876 TEST_HEADER include/spdk/bit_pool.h 00:02:44.876 TEST_HEADER include/spdk/blob_bdev.h 00:02:44.876 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:44.877 CC examples/vmd/led/led.o 00:02:44.877 TEST_HEADER include/spdk/blobfs.h 00:02:44.877 TEST_HEADER include/spdk/blob.h 00:02:44.877 TEST_HEADER include/spdk/conf.h 00:02:44.877 TEST_HEADER include/spdk/config.h 00:02:44.877 LINK spdk_bdev 00:02:44.877 TEST_HEADER include/spdk/cpuset.h 00:02:44.877 TEST_HEADER include/spdk/crc16.h 00:02:44.877 TEST_HEADER include/spdk/crc32.h 00:02:44.877 TEST_HEADER include/spdk/crc64.h 00:02:44.877 TEST_HEADER include/spdk/dif.h 00:02:44.877 TEST_HEADER include/spdk/dma.h 00:02:44.877 TEST_HEADER include/spdk/endian.h 00:02:44.877 TEST_HEADER include/spdk/env_dpdk.h 00:02:44.877 TEST_HEADER include/spdk/env.h 00:02:44.877 TEST_HEADER include/spdk/event.h 00:02:44.877 LINK nvme_fuzz 00:02:44.877 TEST_HEADER include/spdk/fd_group.h 00:02:44.877 TEST_HEADER include/spdk/fd.h 00:02:44.877 TEST_HEADER include/spdk/file.h 00:02:44.877 TEST_HEADER include/spdk/ftl.h 00:02:44.877 TEST_HEADER include/spdk/gpt_spec.h 00:02:44.877 TEST_HEADER include/spdk/hexlify.h 00:02:44.877 TEST_HEADER include/spdk/histogram_data.h 00:02:44.877 TEST_HEADER include/spdk/idxd.h 00:02:44.877 TEST_HEADER include/spdk/idxd_spec.h 00:02:44.877 TEST_HEADER include/spdk/init.h 00:02:44.877 TEST_HEADER include/spdk/ioat.h 00:02:44.877 TEST_HEADER 
include/spdk/ioat_spec.h 00:02:45.134 TEST_HEADER include/spdk/iscsi_spec.h 00:02:45.134 TEST_HEADER include/spdk/json.h 00:02:45.134 TEST_HEADER include/spdk/jsonrpc.h 00:02:45.134 TEST_HEADER include/spdk/keyring.h 00:02:45.134 TEST_HEADER include/spdk/keyring_module.h 00:02:45.134 CC test/app/histogram_perf/histogram_perf.o 00:02:45.134 TEST_HEADER include/spdk/likely.h 00:02:45.134 LINK lsvmd 00:02:45.134 TEST_HEADER include/spdk/log.h 00:02:45.134 TEST_HEADER include/spdk/lvol.h 00:02:45.134 TEST_HEADER include/spdk/memory.h 00:02:45.134 TEST_HEADER include/spdk/mmio.h 00:02:45.134 TEST_HEADER include/spdk/nbd.h 00:02:45.134 TEST_HEADER include/spdk/notify.h 00:02:45.134 TEST_HEADER include/spdk/nvme.h 00:02:45.134 TEST_HEADER include/spdk/nvme_intel.h 00:02:45.134 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:45.134 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:45.134 TEST_HEADER include/spdk/nvme_spec.h 00:02:45.134 TEST_HEADER include/spdk/nvme_zns.h 00:02:45.134 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:45.134 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:45.134 TEST_HEADER include/spdk/nvmf.h 00:02:45.134 TEST_HEADER include/spdk/nvmf_spec.h 00:02:45.134 TEST_HEADER include/spdk/nvmf_transport.h 00:02:45.134 TEST_HEADER include/spdk/opal.h 00:02:45.134 TEST_HEADER include/spdk/opal_spec.h 00:02:45.134 TEST_HEADER include/spdk/pci_ids.h 00:02:45.134 TEST_HEADER include/spdk/pipe.h 00:02:45.134 TEST_HEADER include/spdk/queue.h 00:02:45.134 TEST_HEADER include/spdk/reduce.h 00:02:45.134 TEST_HEADER include/spdk/rpc.h 00:02:45.134 TEST_HEADER include/spdk/scheduler.h 00:02:45.134 TEST_HEADER include/spdk/scsi.h 00:02:45.134 TEST_HEADER include/spdk/scsi_spec.h 00:02:45.134 TEST_HEADER include/spdk/sock.h 00:02:45.134 TEST_HEADER include/spdk/stdinc.h 00:02:45.134 TEST_HEADER include/spdk/string.h 00:02:45.134 TEST_HEADER include/spdk/thread.h 00:02:45.134 TEST_HEADER include/spdk/trace.h 00:02:45.134 TEST_HEADER include/spdk/trace_parser.h 00:02:45.134 TEST_HEADER include/spdk/tree.h 00:02:45.134 TEST_HEADER include/spdk/ublk.h 00:02:45.134 TEST_HEADER include/spdk/util.h 00:02:45.134 TEST_HEADER include/spdk/uuid.h 00:02:45.134 TEST_HEADER include/spdk/version.h 00:02:45.134 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:45.134 LINK led 00:02:45.134 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:45.134 TEST_HEADER include/spdk/vhost.h 00:02:45.134 TEST_HEADER include/spdk/vmd.h 00:02:45.134 TEST_HEADER include/spdk/xor.h 00:02:45.134 TEST_HEADER include/spdk/zipf.h 00:02:45.134 CXX test/cpp_headers/accel.o 00:02:45.134 CXX test/cpp_headers/accel_module.o 00:02:45.134 LINK histogram_perf 00:02:45.134 CC test/app/jsoncat/jsoncat.o 00:02:45.134 CC test/env/mem_callbacks/mem_callbacks.o 00:02:45.134 LINK vhost_fuzz 00:02:45.134 CC test/event/event_perf/event_perf.o 00:02:45.391 CC test/event/reactor/reactor.o 00:02:45.391 CXX test/cpp_headers/assert.o 00:02:45.391 LINK jsoncat 00:02:45.391 LINK event_perf 00:02:45.391 LINK reactor 00:02:45.391 CC examples/accel/perf/accel_perf.o 00:02:45.391 CC examples/idxd/perf/perf.o 00:02:45.391 CXX test/cpp_headers/barrier.o 00:02:45.649 CC examples/blob/cli/blobcli.o 00:02:45.649 CC examples/blob/hello_world/hello_blob.o 00:02:45.649 CC examples/nvme/hello_world/hello_world.o 00:02:45.649 CC test/app/stub/stub.o 00:02:45.649 CXX test/cpp_headers/base64.o 00:02:45.649 CC test/event/reactor_perf/reactor_perf.o 00:02:45.906 LINK idxd_perf 00:02:45.906 LINK reactor_perf 00:02:45.906 CXX test/cpp_headers/bdev.o 00:02:45.906 LINK hello_world 
00:02:45.906 LINK hello_blob 00:02:45.906 LINK stub 00:02:45.906 LINK mem_callbacks 00:02:45.906 CXX test/cpp_headers/bdev_module.o 00:02:46.164 LINK accel_perf 00:02:46.164 LINK blobcli 00:02:46.164 CC test/event/app_repeat/app_repeat.o 00:02:46.164 CC test/env/vtophys/vtophys.o 00:02:46.164 CC examples/nvme/reconnect/reconnect.o 00:02:46.164 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.164 CC test/rpc_client/rpc_client_test.o 00:02:46.164 CXX test/cpp_headers/bdev_zone.o 00:02:46.164 CXX test/cpp_headers/bit_array.o 00:02:46.164 CC test/nvme/aer/aer.o 00:02:46.164 LINK iscsi_fuzz 00:02:46.422 LINK app_repeat 00:02:46.422 LINK vtophys 00:02:46.422 LINK env_dpdk_post_init 00:02:46.422 CXX test/cpp_headers/bit_pool.o 00:02:46.422 LINK rpc_client_test 00:02:46.422 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:46.422 CC examples/nvme/arbitration/arbitration.o 00:02:46.422 LINK reconnect 00:02:46.680 LINK aer 00:02:46.680 CXX test/cpp_headers/blob_bdev.o 00:02:46.680 CC test/event/scheduler/scheduler.o 00:02:46.680 CC test/env/memory/memory_ut.o 00:02:46.680 CC test/env/pci/pci_ut.o 00:02:46.680 CC examples/bdev/hello_world/hello_bdev.o 00:02:46.680 CXX test/cpp_headers/blobfs_bdev.o 00:02:46.939 CC test/accel/dif/dif.o 00:02:46.939 CC test/nvme/reset/reset.o 00:02:46.939 LINK arbitration 00:02:46.939 LINK scheduler 00:02:46.939 CC test/blobfs/mkfs/mkfs.o 00:02:46.939 LINK nvme_manage 00:02:46.939 LINK hello_bdev 00:02:46.939 CXX test/cpp_headers/blobfs.o 00:02:46.939 CXX test/cpp_headers/blob.o 00:02:46.939 CXX test/cpp_headers/conf.o 00:02:47.197 LINK reset 00:02:47.197 LINK pci_ut 00:02:47.197 LINK mkfs 00:02:47.197 CC examples/nvme/hotplug/hotplug.o 00:02:47.197 CXX test/cpp_headers/config.o 00:02:47.197 CXX test/cpp_headers/cpuset.o 00:02:47.197 LINK dif 00:02:47.197 CC test/nvme/sgl/sgl.o 00:02:47.197 CC test/nvme/e2edp/nvme_dp.o 00:02:47.456 CC examples/bdev/bdevperf/bdevperf.o 00:02:47.456 CC test/nvme/overhead/overhead.o 00:02:47.456 LINK hotplug 00:02:47.456 CXX test/cpp_headers/crc16.o 00:02:47.456 CC test/nvme/err_injection/err_injection.o 00:02:47.456 CC test/lvol/esnap/esnap.o 00:02:47.456 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.456 LINK sgl 00:02:47.456 LINK nvme_dp 00:02:47.714 CXX test/cpp_headers/crc32.o 00:02:47.714 LINK err_injection 00:02:47.714 LINK overhead 00:02:47.714 LINK memory_ut 00:02:47.714 LINK cmb_copy 00:02:47.714 CXX test/cpp_headers/crc64.o 00:02:47.714 CC test/bdev/bdevio/bdevio.o 00:02:47.714 CC test/nvme/startup/startup.o 00:02:47.972 CC test/nvme/reserve/reserve.o 00:02:47.972 CXX test/cpp_headers/dif.o 00:02:47.972 CC test/nvme/simple_copy/simple_copy.o 00:02:47.972 CC test/nvme/connect_stress/connect_stress.o 00:02:47.972 LINK startup 00:02:47.972 CC examples/nvme/abort/abort.o 00:02:47.972 CC test/nvme/boot_partition/boot_partition.o 00:02:47.972 LINK reserve 00:02:47.972 LINK bdevperf 00:02:47.972 CXX test/cpp_headers/dma.o 00:02:48.230 LINK connect_stress 00:02:48.230 LINK simple_copy 00:02:48.230 LINK bdevio 00:02:48.230 LINK boot_partition 00:02:48.230 CC test/nvme/compliance/nvme_compliance.o 00:02:48.230 CXX test/cpp_headers/endian.o 00:02:48.230 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.230 CXX test/cpp_headers/env_dpdk.o 00:02:48.487 LINK abort 00:02:48.487 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:48.487 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.487 CXX test/cpp_headers/env.o 00:02:48.487 CC test/nvme/fdp/fdp.o 00:02:48.487 CC test/nvme/cuse/cuse.o 00:02:48.487 CXX 
test/cpp_headers/event.o 00:02:48.487 CXX test/cpp_headers/fd_group.o 00:02:48.487 LINK fused_ordering 00:02:48.487 LINK pmr_persistence 00:02:48.487 LINK nvme_compliance 00:02:48.746 CXX test/cpp_headers/fd.o 00:02:48.746 LINK doorbell_aers 00:02:48.746 CXX test/cpp_headers/file.o 00:02:48.746 CXX test/cpp_headers/ftl.o 00:02:48.746 CXX test/cpp_headers/gpt_spec.o 00:02:48.746 CXX test/cpp_headers/hexlify.o 00:02:48.746 CXX test/cpp_headers/histogram_data.o 00:02:48.746 CXX test/cpp_headers/idxd.o 00:02:48.746 LINK fdp 00:02:49.004 CXX test/cpp_headers/idxd_spec.o 00:02:49.004 CXX test/cpp_headers/init.o 00:02:49.004 CXX test/cpp_headers/ioat.o 00:02:49.004 CXX test/cpp_headers/ioat_spec.o 00:02:49.004 CXX test/cpp_headers/iscsi_spec.o 00:02:49.004 CC examples/nvmf/nvmf/nvmf.o 00:02:49.004 CXX test/cpp_headers/json.o 00:02:49.004 CXX test/cpp_headers/jsonrpc.o 00:02:49.004 CXX test/cpp_headers/keyring.o 00:02:49.004 CXX test/cpp_headers/keyring_module.o 00:02:49.262 CXX test/cpp_headers/likely.o 00:02:49.262 CXX test/cpp_headers/log.o 00:02:49.262 CXX test/cpp_headers/lvol.o 00:02:49.262 CXX test/cpp_headers/memory.o 00:02:49.262 CXX test/cpp_headers/mmio.o 00:02:49.262 CXX test/cpp_headers/nbd.o 00:02:49.262 CXX test/cpp_headers/notify.o 00:02:49.262 CXX test/cpp_headers/nvme.o 00:02:49.262 CXX test/cpp_headers/nvme_intel.o 00:02:49.262 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.262 LINK nvmf 00:02:49.521 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:49.521 CXX test/cpp_headers/nvme_spec.o 00:02:49.521 CXX test/cpp_headers/nvme_zns.o 00:02:49.521 CXX test/cpp_headers/nvmf_cmd.o 00:02:49.521 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:49.521 CXX test/cpp_headers/nvmf.o 00:02:49.521 CXX test/cpp_headers/nvmf_spec.o 00:02:49.521 CXX test/cpp_headers/nvmf_transport.o 00:02:49.521 CXX test/cpp_headers/opal.o 00:02:49.521 CXX test/cpp_headers/opal_spec.o 00:02:49.521 CXX test/cpp_headers/pci_ids.o 00:02:49.779 CXX test/cpp_headers/pipe.o 00:02:49.779 CXX test/cpp_headers/queue.o 00:02:49.779 CXX test/cpp_headers/reduce.o 00:02:49.779 CXX test/cpp_headers/rpc.o 00:02:49.779 CXX test/cpp_headers/scheduler.o 00:02:49.779 CXX test/cpp_headers/scsi.o 00:02:49.779 CXX test/cpp_headers/scsi_spec.o 00:02:49.779 CXX test/cpp_headers/sock.o 00:02:49.779 CXX test/cpp_headers/stdinc.o 00:02:49.779 CXX test/cpp_headers/string.o 00:02:49.779 CXX test/cpp_headers/thread.o 00:02:49.779 LINK cuse 00:02:49.779 CXX test/cpp_headers/trace.o 00:02:49.779 CXX test/cpp_headers/trace_parser.o 00:02:50.037 CXX test/cpp_headers/tree.o 00:02:50.037 CXX test/cpp_headers/ublk.o 00:02:50.037 CXX test/cpp_headers/util.o 00:02:50.037 CXX test/cpp_headers/uuid.o 00:02:50.037 CXX test/cpp_headers/version.o 00:02:50.037 CXX test/cpp_headers/vfio_user_pci.o 00:02:50.037 CXX test/cpp_headers/vfio_user_spec.o 00:02:50.037 CXX test/cpp_headers/vhost.o 00:02:50.037 CXX test/cpp_headers/vmd.o 00:02:50.037 CXX test/cpp_headers/xor.o 00:02:50.037 CXX test/cpp_headers/zipf.o 00:02:52.643 LINK esnap 00:02:52.901 00:02:52.901 real 1m4.263s 00:02:52.901 user 6m24.588s 00:02:52.901 sys 1m31.762s 00:02:52.901 12:45:08 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:52.901 12:45:08 make -- common/autotest_common.sh@10 -- $ set +x 00:02:52.901 ************************************ 00:02:52.901 END TEST make 00:02:52.901 ************************************ 00:02:52.901 12:45:08 -- common/autotest_common.sh@1142 -- $ return 0 00:02:52.901 12:45:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:52.901 12:45:08 -- 
pm/common@29 -- $ signal_monitor_resources TERM 00:02:52.901 12:45:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:52.901 12:45:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.901 12:45:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:52.901 12:45:08 -- pm/common@44 -- $ pid=5132 00:02:52.901 12:45:08 -- pm/common@50 -- $ kill -TERM 5132 00:02:52.901 12:45:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.901 12:45:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:52.901 12:45:08 -- pm/common@44 -- $ pid=5134 00:02:52.901 12:45:08 -- pm/common@50 -- $ kill -TERM 5134 00:02:52.901 12:45:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:52.901 12:45:08 -- nvmf/common.sh@7 -- # uname -s 00:02:52.901 12:45:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.901 12:45:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.901 12:45:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.901 12:45:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.901 12:45:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:52.901 12:45:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.901 12:45:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.901 12:45:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.901 12:45:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.901 12:45:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:53.158 12:45:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:02:53.158 12:45:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:02:53.158 12:45:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:53.158 12:45:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:53.158 12:45:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:53.158 12:45:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:53.158 12:45:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:53.158 12:45:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:53.158 12:45:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.158 12:45:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.158 12:45:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.158 12:45:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.158 12:45:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.158 12:45:08 -- paths/export.sh@5 -- # export PATH 
00:02:53.158 12:45:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.158 12:45:08 -- nvmf/common.sh@47 -- # : 0 00:02:53.158 12:45:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:53.158 12:45:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:53.158 12:45:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:53.158 12:45:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:53.158 12:45:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:53.158 12:45:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:53.158 12:45:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:53.158 12:45:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:53.158 12:45:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:53.158 12:45:08 -- spdk/autotest.sh@32 -- # uname -s 00:02:53.158 12:45:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:53.158 12:45:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:53.158 12:45:08 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:53.158 12:45:08 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:53.158 12:45:08 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:53.158 12:45:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:53.158 12:45:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:53.158 12:45:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:53.158 12:45:09 -- spdk/autotest.sh@48 -- # udevadm_pid=52766 00:02:53.158 12:45:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:53.158 12:45:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:53.158 12:45:09 -- pm/common@17 -- # local monitor 00:02:53.158 12:45:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.158 12:45:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.158 12:45:09 -- pm/common@21 -- # date +%s 00:02:53.158 12:45:09 -- pm/common@25 -- # sleep 1 00:02:53.158 12:45:09 -- pm/common@21 -- # date +%s 00:02:53.158 12:45:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721047509 00:02:53.158 12:45:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721047509 00:02:53.158 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721047509_collect-vmstat.pm.log 00:02:53.158 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721047509_collect-cpu-load.pm.log 00:02:54.103 12:45:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:54.103 12:45:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:54.103 12:45:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:54.103 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:02:54.103 12:45:10 -- spdk/autotest.sh@59 -- # create_test_list 00:02:54.103 12:45:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:54.103 12:45:10 -- 
common/autotest_common.sh@10 -- # set +x 00:02:54.103 12:45:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:54.103 12:45:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:54.103 12:45:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:54.103 12:45:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:54.103 12:45:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:54.103 12:45:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:54.103 12:45:10 -- common/autotest_common.sh@1455 -- # uname 00:02:54.103 12:45:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:54.103 12:45:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:54.103 12:45:10 -- common/autotest_common.sh@1475 -- # uname 00:02:54.103 12:45:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:54.103 12:45:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:54.103 12:45:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:54.103 12:45:10 -- spdk/autotest.sh@72 -- # hash lcov 00:02:54.103 12:45:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:54.103 12:45:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:54.103 --rc lcov_branch_coverage=1 00:02:54.103 --rc lcov_function_coverage=1 00:02:54.103 --rc genhtml_branch_coverage=1 00:02:54.103 --rc genhtml_function_coverage=1 00:02:54.103 --rc genhtml_legend=1 00:02:54.103 --rc geninfo_all_blocks=1 00:02:54.103 ' 00:02:54.103 12:45:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:54.103 --rc lcov_branch_coverage=1 00:02:54.103 --rc lcov_function_coverage=1 00:02:54.103 --rc genhtml_branch_coverage=1 00:02:54.103 --rc genhtml_function_coverage=1 00:02:54.103 --rc genhtml_legend=1 00:02:54.103 --rc geninfo_all_blocks=1 00:02:54.103 ' 00:02:54.103 12:45:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:54.103 --rc lcov_branch_coverage=1 00:02:54.103 --rc lcov_function_coverage=1 00:02:54.103 --rc genhtml_branch_coverage=1 00:02:54.103 --rc genhtml_function_coverage=1 00:02:54.103 --rc genhtml_legend=1 00:02:54.103 --rc geninfo_all_blocks=1 00:02:54.103 --no-external' 00:02:54.103 12:45:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:54.103 --rc lcov_branch_coverage=1 00:02:54.103 --rc lcov_function_coverage=1 00:02:54.103 --rc genhtml_branch_coverage=1 00:02:54.103 --rc genhtml_function_coverage=1 00:02:54.103 --rc genhtml_legend=1 00:02:54.103 --rc geninfo_all_blocks=1 00:02:54.103 --no-external' 00:02:54.103 12:45:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:54.362 lcov: LCOV version 1.14 00:02:54.362 12:45:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:09.230 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:09.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:21.421 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions 
found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:21.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:21.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:21.422 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:21.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:21.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:24.712 12:45:40 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:24.712 12:45:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:24.712 12:45:40 -- common/autotest_common.sh@10 -- # set +x 00:03:24.712 12:45:40 -- spdk/autotest.sh@91 -- # rm -f 00:03:24.712 12:45:40 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:25.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:25.645 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:25.645 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:25.645 12:45:41 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:25.645 12:45:41 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:25.645 12:45:41 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:25.645 12:45:41 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:25.645 12:45:41 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.645 12:45:41 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:25.645 12:45:41 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:25.645 12:45:41 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.645 12:45:41 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.645 12:45:41 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.645 12:45:41 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:25.645 12:45:41 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:25.645 12:45:41 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:25.645 12:45:41 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.646 12:45:41 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.646 12:45:41 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:25.646 12:45:41 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:25.646 12:45:41 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:25.646 12:45:41 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.646 12:45:41 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.646 12:45:41 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:25.646 12:45:41 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 
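The get_zoned_devs walk traced above (the is_block_zoned calls against /sys/block/nvme*) reduces to a simple sysfs check: a namespace counts as zoned only when queue/zoned exists and does not read back "none". A rough reconstruction of that scan is sketched below; the real helper also records the PCI address of each zoned device, which is omitted here.

    #!/usr/bin/env bash
    # Illustrative zoned-namespace scan mirroring the trace above.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -d $nvme ]] || continue            # glob may not match on hosts without NVMe
        device=${nvme##*/}
        # Zoned only if the sysfs attribute exists and reads something other than "none"
        # (e.g. "host-managed"); on this run every namespace reported "none".
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$device]=1
        fi
    done
    echo "zoned namespaces: ${#zoned_devs[@]}"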
00:03:25.646 12:45:41 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:25.646 12:45:41 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.646 12:45:41 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:25.646 12:45:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.646 12:45:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:25.646 12:45:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:25.646 12:45:41 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:25.646 12:45:41 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:25.646 No valid GPT data, bailing 00:03:25.646 12:45:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:25.646 12:45:41 -- scripts/common.sh@391 -- # pt= 00:03:25.646 12:45:41 -- scripts/common.sh@392 -- # return 1 00:03:25.646 12:45:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:25.646 1+0 records in 00:03:25.646 1+0 records out 00:03:25.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00947551 s, 111 MB/s 00:03:25.646 12:45:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.646 12:45:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:25.646 12:45:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:25.646 12:45:41 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:25.646 12:45:41 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:25.646 No valid GPT data, bailing 00:03:25.646 12:45:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:25.646 12:45:41 -- scripts/common.sh@391 -- # pt= 00:03:25.646 12:45:41 -- scripts/common.sh@392 -- # return 1 00:03:25.646 12:45:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:25.646 1+0 records in 00:03:25.646 1+0 records out 00:03:25.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399268 s, 263 MB/s 00:03:25.646 12:45:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.646 12:45:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:25.646 12:45:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:25.646 12:45:41 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:25.646 12:45:41 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:25.646 No valid GPT data, bailing 00:03:25.646 12:45:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:25.646 12:45:41 -- scripts/common.sh@391 -- # pt= 00:03:25.646 12:45:41 -- scripts/common.sh@392 -- # return 1 00:03:25.646 12:45:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:25.646 1+0 records in 00:03:25.646 1+0 records out 00:03:25.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00345928 s, 303 MB/s 00:03:25.646 12:45:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.646 12:45:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:25.646 12:45:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:25.646 12:45:41 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:25.646 12:45:41 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:25.909 No valid GPT data, bailing 00:03:25.909 12:45:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:25.909 12:45:41 -- scripts/common.sh@391 -- # pt= 00:03:25.909 12:45:41 -- 
scripts/common.sh@392 -- # return 1 00:03:25.909 12:45:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:25.909 1+0 records in 00:03:25.909 1+0 records out 00:03:25.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408388 s, 257 MB/s 00:03:25.909 12:45:41 -- spdk/autotest.sh@118 -- # sync 00:03:25.909 12:45:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:25.909 12:45:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:25.909 12:45:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:27.804 12:45:43 -- spdk/autotest.sh@124 -- # uname -s 00:03:27.804 12:45:43 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:27.804 12:45:43 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:27.804 12:45:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.804 12:45:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.804 12:45:43 -- common/autotest_common.sh@10 -- # set +x 00:03:27.804 ************************************ 00:03:27.804 START TEST setup.sh 00:03:27.804 ************************************ 00:03:27.804 12:45:43 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:27.804 * Looking for test storage... 00:03:27.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:27.804 12:45:43 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:27.804 12:45:43 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:27.804 12:45:43 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:27.804 12:45:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.804 12:45:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.804 12:45:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:27.804 ************************************ 00:03:27.804 START TEST acl 00:03:27.804 ************************************ 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:27.804 * Looking for test storage... 
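The per-namespace wipe traced just above follows a simple pattern: ask spdk-gpt.py and blkid whether the device already carries partition metadata, and if not, zero its first MiB so stale data cannot confuse later stages. A hedged approximation of that loop is sketched below; paths mirror the trace, and the real autotest.sh uses an extglob pattern plus a block_in_use helper rather than this inline check.

    #!/usr/bin/env bash
    # Illustrative wipe loop for unused NVMe namespaces, modeled on the trace above.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location, as in the log
    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                    # whole namespaces only, skip partitions
        [[ -b $dev ]] || continue
        "$SPDK_DIR/scripts/spdk-gpt.py" "$dev" || true   # prints "No valid GPT data, bailing" when empty
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            # Nothing claims the device: clear the first 1 MiB, matching the dd calls in the log.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done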
00:03:27.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:27.804 12:45:43 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:27.804 12:45:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.804 12:45:43 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:27.804 12:45:43 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:27.804 12:45:43 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:27.804 12:45:43 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:27.804 12:45:43 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:27.804 12:45:43 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.804 12:45:43 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:28.734 12:45:44 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:28.734 12:45:44 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:28.734 12:45:44 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:28.734 12:45:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.734 12:45:44 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.734 12:45:44 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:29.299 12:45:45 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.299 Hugepages 00:03:29.299 node hugesize free / total 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.299 00:03:29.299 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:29.299 12:45:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:29.299 12:45:45 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.299 12:45:45 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.299 12:45:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:29.299 ************************************ 00:03:29.299 START TEST denied 00:03:29.299 ************************************ 00:03:29.299 12:45:45 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:29.299 12:45:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:29.299 12:45:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:29.299 12:45:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.299 12:45:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:29.299 12:45:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:30.232 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.232 12:45:46 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.831 00:03:30.831 real 0m1.379s 00:03:30.831 user 0m0.585s 00:03:30.831 sys 0m0.757s 00:03:30.831 12:45:46 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.831 ************************************ 00:03:30.831 END TEST denied 00:03:30.831 ************************************ 00:03:30.831 12:45:46 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:30.831 12:45:46 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:30.831 12:45:46 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:30.831 12:45:46 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.831 12:45:46 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.831 12:45:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.831 ************************************ 00:03:30.831 START TEST allowed 00:03:30.831 ************************************ 00:03:30.831 12:45:46 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:30.831 12:45:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:30.831 12:45:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:30.831 12:45:46 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:30.831 12:45:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.831 12:45:46 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:31.764 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.764 12:45:47 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:32.341 00:03:32.341 real 0m1.463s 00:03:32.341 user 0m0.627s 00:03:32.341 sys 0m0.833s 00:03:32.341 12:45:48 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:32.341 ************************************ 00:03:32.341 END TEST allowed 00:03:32.341 12:45:48 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:32.341 ************************************ 00:03:32.341 12:45:48 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:32.341 ************************************ 00:03:32.341 END TEST acl 00:03:32.341 ************************************ 00:03:32.341 00:03:32.341 real 0m4.608s 00:03:32.341 user 0m2.038s 00:03:32.341 sys 0m2.542s 00:03:32.341 12:45:48 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.341 12:45:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:32.341 12:45:48 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:32.341 12:45:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:32.341 12:45:48 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.341 12:45:48 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.341 12:45:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.341 ************************************ 00:03:32.341 START TEST hugepages 00:03:32.341 ************************************ 00:03:32.341 12:45:48 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:32.341 * Looking for test storage... 00:03:32.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6019036 kB' 'MemAvailable: 7413472 kB' 'Buffers: 2436 kB' 'Cached: 1608732 kB' 'SwapCached: 0 kB' 'Active: 435288 kB' 'Inactive: 1279820 kB' 'Active(anon): 114428 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 105572 kB' 'Mapped: 48752 kB' 'Shmem: 10488 kB' 'KReclaimable: 61388 kB' 'Slab: 132676 kB' 'SReclaimable: 61388 kB' 'SUnreclaim: 71288 kB' 'KernelStack: 6284 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.341 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.623 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.623 12:45:48 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': '
[trace condensed: setup/common.sh get_meminfo reads /proc/meminfo with IFS=': ' and "read -r var val _", executing "continue" for every key that is not Hugepagesize]
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
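The trace above is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ' until it reaches the requested key (Hugepagesize, which resolves to 2048 kB on this runner). A minimal standalone sketch of that lookup pattern, using a hypothetical helper name meminfo_value rather than SPDK's own function:

    #!/usr/bin/env bash
    # Minimal sketch of the lookup pattern traced above (illustrative, not the SPDK helper).
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key, as the trace does
            echo "$val"                        # numeric value only; a trailing "kB" lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }
    # meminfo_value Hugepagesize   -> prints 2048 on a host configured like this runner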
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:32.624 12:45:48 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:32.624 12:45:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:32.624 12:45:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:32.624 12:45:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:32.624 ************************************
00:03:32.624 START TEST default_setup
00:03:32.624 ************************************
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.624 12:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:33.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:33.190 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:03:33.190 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:33.454 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100448 kB' 'MemAvailable: 9494728 kB' 'Buffers: 2436 kB' 'Cached: 1608720 kB' 'SwapCached: 0 kB' 'Active: 452048 kB' 'Inactive: 1279824 kB' 'Active(anon): 131188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122064 kB' 'Mapped: 48840 kB' 'Shmem: 10468 kB' 'KReclaimable: 61072 kB' 'Slab: 132364 kB' 'SReclaimable: 61072 kB' 'SUnreclaim: 71292 kB' 'KernelStack: 6256 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
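Before the verification above starts, the trace sizes the test allocation: get_test_nr_hugepages is called with 2097152 and node 0, and with the 2048 kB default hugepage size that works out to nr_hugepages=1024 on the single node. A rough sketch of that arithmetic (variable names are illustrative and the units are assumed to be kB, as the numbers suggest):

    # Sketch of the sizing arithmetic implied by the trace (illustrative names, kB units assumed).
    size_kb=2097152                                   # requested test size, as passed in the trace
    hugepage_kb=2048                                  # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$((size_kb / hugepage_kb))           # 2097152 / 2048 = 1024 pages
    declare -A nodes_test
    for node in 0; do                                 # this VM exposes a single NUMA node
        nodes_test[$node]=$nr_hugepages               # mirrors nodes_test[_no_nodes]=1024 above
    done
    echo "node0 gets ${nodes_test[0]} hugepages"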
[trace condensed: get_meminfo loops over every /proc/meminfo field again, executing "continue" for each key that is not AnonHugePages]
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
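verify_nr_hugepages reads AnonHugePages, HugePages_Surp and HugePages_Rsvd one get_meminfo call at a time, and each call rescans /proc/meminfo from the top. Purely as an illustration (not how setup/common.sh is written), the same counters could be collected in a single pass:

    # Illustrative one-pass alternative, not SPDK's implementation.
    declare -A hp
    while IFS=': ' read -r key val _; do
        case $key in
            AnonHugePages|HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp)
                hp[$key]=$val ;;                      # e.g. hp[HugePages_Surp]=0 on this runner
        esac
    done < /proc/meminfo
    printf '%s=%s\n' AnonHugePages "${hp[AnonHugePages]}" HugePages_Surp "${hp[HugePages_Surp]}"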
# mem=("${mem[@]#Node +([0-9]) }") 00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100448 kB' 'MemAvailable: 9494732 kB' 'Buffers: 2436 kB' 'Cached: 1608716 kB' 'SwapCached: 0 kB' 'Active: 451684 kB' 'Inactive: 1279828 kB' 'Active(anon): 130824 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121956 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132356 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71288 kB' 'KernelStack: 6308 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.455 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.456 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
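The snapshots above already carry the numbers the verification is after: HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0. The general shape of the consistency check being built up here, sketched only as an idea rather than as verify_nr_hugepages itself, is to compare the global count against the per-node sysfs counters:

    # Sketch of the consistency idea only; not the actual verify_nr_hugepages code.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    node_sum=0
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        [[ -e $f ]] || continue                       # glob may not match on other topologies
        node_sum=$((node_sum + $(<"$f")))
    done
    if (( total == node_sum )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: /proc/meminfo reports $total, nodes sum to $node_sum" >&2
    fi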
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:33.457 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100448 kB' 'MemAvailable: 9494732 kB' 'Buffers: 2436 kB' 'Cached: 1608716 kB' 'SwapCached: 0 kB' 'Active: 451892 kB' 'Inactive: 1279828 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122152 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132352 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71284 kB' 'KernelStack: 6276 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[trace continues: get_meminfo scans each /proc/meminfo field in turn, looking for HugePages_Rsvd]
00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 
12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.458 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:33.459 nr_hugepages=1024 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.459 resv_hugepages=0 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.459 surplus_hugepages=0 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.459 anon_hugepages=0 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100448 kB' 'MemAvailable: 9494732 kB' 'Buffers: 2436 kB' 'Cached: 1608716 kB' 'SwapCached: 0 kB' 'Active: 451844 kB' 'Inactive: 1279828 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122104 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132352 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71284 kB' 'KernelStack: 6260 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.459 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 
12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.460 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100448 kB' 'MemUsed: 4141524 kB' 'SwapCached: 0 kB' 'Active: 451800 kB' 'Inactive: 1279836 kB' 'Active(anon): 130940 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1611160 kB' 'Mapped: 48696 kB' 'AnonPages: 122084 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61068 kB' 'Slab: 132348 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.461 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
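[editor's note] The trace lines above are the test scripts' get_meminfo helper scanning a meminfo file field by field: it mapfile-reads the file, strips any "Node <N> " prefix, then walks the fields with an IFS=': ' read loop, skipping every field until the requested one (here HugePages_Surp) and echoing its value. As a readability aid only, here is a minimal, self-contained bash sketch of that pattern; the name get_meminfo_sketch, its argument handling, and the example calls are illustrative assumptions, not the exact setup/common.sh implementation being traced.

  #!/usr/bin/env bash
  # Sketch of the traced pattern, not the real setup/common.sh code.
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local -a mem
      local var val _
      # Per-node stats live in /sys/devices/system/node/node<N>/meminfo when a node id is given.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <N> "; strip that prefix (needs extglob).
      mem=("${mem[@]#Node +([0-9]) }")
      # Walk the fields with IFS=': ' until the requested one is found, then print its value.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp      -> system-wide surplus hugepage count
  #      get_meminfo_sketch HugePages_Total 0   -> total hugepages on NUMA node 0 (hypothetical call)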
00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.462 node0=1024 expecting 1024 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:33.462
00:03:33.462 real 0m0.985s
00:03:33.462 user 0m0.507s
00:03:33.462 sys 0m0.451s
00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:33.462 12:45:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:33.462 ************************************
00:03:33.462 END TEST default_setup
00:03:33.462 ************************************
00:03:33.462 12:45:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:33.462 12:45:49 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:33.462 12:45:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:33.462 12:45:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:33.462 12:45:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:33.462 ************************************
00:03:33.462 START TEST per_node_1G_alloc
00:03:33.462 ************************************
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.462 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.462 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.034 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.034 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.034 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152984 kB' 'MemAvailable: 10547276 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452900 kB' 'Inactive: 1279836 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132364 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71296 kB' 'KernelStack: 6388 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.035 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152984 kB' 'MemAvailable: 10547276 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452116 kB' 'Inactive: 1279836 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132364 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71296 kB' 'KernelStack: 6288 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.036 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.037 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152984 kB' 'MemAvailable: 10547276 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 451948 kB' 'Inactive: 1279836 kB' 'Active(anon): 131088 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122188 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132364 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71296 kB' 'KernelStack: 6240 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.038 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.039 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 
12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.040 nr_hugepages=512 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:34.040 resv_hugepages=0 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.040 surplus_hugepages=0 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.040 anon_hugepages=0 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152984 kB' 'MemAvailable: 10547276 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452040 kB' 'Inactive: 1279836 kB' 'Active(anon): 131180 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122312 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132364 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71296 kB' 'KernelStack: 6272 kB' 
'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.040 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
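The long runs of "[[ <field> == ... ]] / continue" records above are setup/common.sh's get_meminfo walking a captured meminfo dump one field at a time and echoing only the requested counter. A minimal sketch of that shape; the helper name get_field is illustrative, and the real get_meminfo also accepts a node number, reads /sys/devices/system/node/node<N>/meminfo for it, and strips the leading "Node <n> " prefix, all omitted here for brevity.

# Hedged sketch, not the SPDK helper itself.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key logs as "continue" above
        echo "$val"                        # value only; the trailing "kB" lands in $_
        return 0
    done < /proc/meminfo
}

get_field HugePages_Total    # prints 512 on this runner, matching the dump above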
00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 
12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.041 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152984 kB' 'MemUsed: 3088988 kB' 'SwapCached: 0 kB' 'Active: 451964 kB' 'Inactive: 1279836 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1611160 kB' 'Mapped: 48696 kB' 'AnonPages: 122240 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 61068 kB' 'Slab: 132360 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.043 node0=512 expecting 512 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.043 00:03:34.043 real 0m0.515s 00:03:34.043 user 0m0.269s 00:03:34.043 sys 0m0.278s 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.043 12:45:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.043 ************************************ 00:03:34.043 END TEST per_node_1G_alloc 00:03:34.043 ************************************ 00:03:34.044 12:45:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.044 12:45:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:34.044 12:45:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.044 12:45:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.044 12:45:50 
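The block above closes out the per_node_1G_alloc case: get_meminfo walks the memory counters until it reaches HugePages_Surp, reports 0 surplus pages, and the test then prints and verifies 'node0=512 expecting 512' before run_test launches even_2G_alloc. A minimal sketch of that kind of per-node check, assuming 2048 kB hugepages and the standard sysfs layout (illustrative only, not the code from setup/hugepages.sh):

#!/usr/bin/env bash
# Hypothetical per-node hugepage check in the spirit of "node0=512 expecting 512".
expected=512
for node_dir in /sys/devices/system/node/node*; do
  node=${node_dir##*node}
  actual=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
  echo "node${node}=${actual} expecting ${expected}"
  [[ "$actual" == "$expected" ]] || exit 1
done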
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.044 ************************************ 00:03:34.044 START TEST even_2G_alloc 00:03:34.044 ************************************ 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.044 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.566 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.566 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc 
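even_2G_alloc starts by asking get_test_nr_hugepages for 2097152 kB; with the default 2048 kB hugepage size that is 1024 pages, and on this single-node VM the whole count lands in nodes_test[0] before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are set and scripts/setup.sh is invoked again. The sizing arithmetic, as a short sketch (variable names here are illustrative; the default page size is read from Hugepagesize in /proc/meminfo):

#!/usr/bin/env bash
# Sketch of the sizing step traced above: 2 GiB requested / 2 MiB pages = 1024 hugepages.
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # typically 2048
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "NRHUGE=${nr_hugepages}"   # 1024 with 2048 kB pages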
-- setup/hugepages.sh@92 -- # local surp 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8108076 kB' 'MemAvailable: 9502368 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452480 kB' 'Inactive: 1279836 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122720 kB' 'Mapped: 48888 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132348 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71280 kB' 'KernelStack: 6260 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8108076 kB' 'MemAvailable: 9502368 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 451688 kB' 'Inactive: 
1279836 kB' 'Active(anon): 130828 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122020 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132348 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71280 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 
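The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by 'continue' are simply xtrace output from get_meminfo in setup/common.sh: it snapshots the meminfo counters, then reads each 'key: value' pair and skips fields until the requested key matches, echoing its value. A condensed sketch of that loop (the real helper also accepts a node argument and strips the 'Node N' prefix from per-node meminfo, which this sketch omits):

#!/usr/bin/env bash
# Simplified version of the get_meminfo pattern visible in the trace (setup/common.sh@17-33).
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # every skipped field shows up as a "continue" line in xtrace
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1
}
get_meminfo HugePages_Surp   # prints 0 on the VM above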
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.569 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8108076 kB' 'MemAvailable: 9502368 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 451940 kB' 'Inactive: 1279836 kB' 'Active(anon): 131080 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122268 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132348 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71280 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.570 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.571 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.572 nr_hugepages=1024 00:03:34.572 resv_hugepages=0 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.572 surplus_hugepages=0 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.572 anon_hugepages=0 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8108076 kB' 'MemAvailable: 9502368 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 451964 kB' 'Inactive: 1279836 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122276 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132348 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71280 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 
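The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" above are the xtrace of the setup/common.sh get_meminfo helper walking every key of /proc/meminfo until it reaches the requested one; the same pattern repeats just below for HugePages_Total and, later, per NUMA node for HugePages_Surp. A minimal sketch of that lookup, reconstructed from the trace rather than copied from the SPDK source (names and exact control flow are assumptions), looks like:

  # Assumed paraphrase of the helper whose xtrace appears above, not the
  # literal setup/common.sh implementation.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ entry
      # per-node queries (e.g. "get_meminfo HugePages_Surp 0") read the sysfs copy
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node N " prefix sysfs adds
      for entry in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$entry"
          [[ $var == "$get" ]] || continue   # every skipped key is one "continue"
                                             # line in the trace above
          echo "$val"                        # e.g. 0 for HugePages_Rsvd in this run
          return 0
      done
      return 1
  }

Against the meminfo dump printed above, a call like get_meminfo HugePages_Rsvd returns 0, which is what hugepages.sh stores as resv=0 before re-querying HugePages_Total.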
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.572 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.573 12:45:50 
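At this point the test holds surp=0 and resv=0 (echoed earlier as surplus_hugepages=0 / resv_hugepages=0) and a system-wide HugePages_Total of 1024, so the consistency check reduces to simple arithmetic before the per-node pass starts. A hedged sketch of that bookkeeping, with the values taken from this run and the structure an assumed simplification of setup/hugepages.sh:

  # Values as reported in this run; variable names mirror the trace, the
  # surrounding structure is an assumed simplification.
  nr_hugepages=1024   # requested by even_2G_alloc: 2 GiB / 2048 kB pages
  surp=0              # HugePages_Surp from /proc/meminfo
  resv=0              # HugePages_Rsvd from /proc/meminfo
  total=1024          # HugePages_Total from /proc/meminfo
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count"

  # get_nodes: one expectation per NUMA node; this single-node VM puts the
  # whole 1024-page pool on node0
  nodes_test=([0]=1024)
  # sanity: 1024 pages * 2048 kB = 2097152 kB, matching "Hugetlb: 2097152 kB"
  # in the dump above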
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.573 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8108476 kB' 'MemUsed: 4133496 kB' 'SwapCached: 0 kB' 'Active: 451888 kB' 'Inactive: 1279836 kB' 'Active(anon): 131028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1611160 kB' 'Mapped: 48696 kB' 'AnonPages: 122160 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61068 kB' 'Slab: 132348 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.574 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- 
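The node-0 query above returns HugePages_Surp = 0, so the per-node expectation stays at 1024 and the "node0=1024 expecting 1024" comparison on the next lines passes. Roughly, the loop being traced amounts to the following (an assumed condensation of the hugepages.sh@115-130 steps, reusing the get_meminfo sketch from earlier):

  # Assumed condensation of the per-node check; values are from this run.
  resv=0
  nodes_sys=([0]=1024)    # populated by get_nodes from /sys/devices/system/node
  nodes_test=([0]=1024)   # expected pages per node
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                       # +0 in this run
      surp_node=$(get_meminfo HugePages_Surp "$node")      # 0 on node0 here
      (( nodes_test[node] += surp_node ))                  # still 1024
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
  done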
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.575 node0=1024 expecting 1024 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.575 00:03:34.575 real 0m0.504s 00:03:34.575 user 0m0.267s 00:03:34.575 sys 0m0.273s 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.575 12:45:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.575 ************************************ 00:03:34.575 END TEST even_2G_alloc 00:03:34.575 ************************************ 00:03:34.575 12:45:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.575 12:45:50 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:34.575 12:45:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.575 12:45:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.575 12:45:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.575 ************************************ 00:03:34.575 START TEST odd_alloc 00:03:34.575 ************************************ 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
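odd_alloc now repeats the same allocate-and-verify cycle, but deliberately asks for a page count that does not divide evenly: HUGEMEM=2049 MB arrives as 2098176 kB, which at the 2048 kB hugepage size works out to 1025 pages. A hedged sketch of that conversion (the round-up is an assumption inferred from nr_hugepages=1025 in the trace, not read from the hugepages.sh source):

  # Rounding up is assumed from the resulting nr_hugepages=1025 in the trace.
  size_kb=2098176                      # HUGEMEM=2049 -> 2049 * 1024 kB
  hugepagesize_kb=2048                 # "Hugepagesize: 2048 kB" in /proc/meminfo
  nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
  echo "$nr_hugepages"                 # 1025: intentionally odd, all on node0

The meminfo dumps that follow are consistent with this: HugePages_Total: 1025 and Hugetlb: 2099200 kB (1025 x 2048 kB).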
00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.575 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.144 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.144 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102884 kB' 'MemAvailable: 9497176 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452456 kB' 'Inactive: 1279836 kB' 'Active(anon): 131596 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122468 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132368 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71300 kB' 'KernelStack: 6228 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.144 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 
12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 
12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.145 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
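All of the [[ X == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue churn above and below is one pattern repeated per /proc/meminfo line: get_meminfo reads the file (or the per-node sysfs copy), strips any "Node N " prefix, splits each line with IFS=': ' into var/val, and echoes val once var matches the requested key. A compact sketch of that pattern, not the setup/common.sh helper verbatim (the real helper mapfiles the whole file into an array first):

  get_meminfo_sketch() {               # illustrative name; mirrors the traced logic
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      # per-node queries read the sysfs copy instead, as the traced -e test suggests
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      shopt -s extglob                  # needed for the +([0-9]) pattern below
      while read -r line; do
          line=${line#Node +([0-9]) }   # drop the "Node N " prefix on sysfs lines
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < "$mem_f"
      echo 0                            # key not present -> report 0, as the trace does
  }
  # e.g. get_meminfo_sketch HugePages_Surp  -> 0 on this VM
  #      get_meminfo_sketch HugePages_Total -> 1025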
00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102884 kB' 'MemAvailable: 9497176 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452108 kB' 'Inactive: 1279836 kB' 'Active(anon): 131248 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132364 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71296 kB' 'KernelStack: 6212 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
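A side note on why the right-hand sides in this trace look like \H\u\g\e\P\a\g\e\s\_\S\u\r\p: when xtrace prints a [[ ... == ... ]] test whose right operand came from a quoted (literal, non-glob) expansion, bash escapes every character with a backslash to show it is matched literally. A small reproduction, assuming plain bash -x behaviour:

  set -x
  get=HugePages_Surp
  [[ SwapFree == "$get" ]]   # traced roughly as: [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  set +x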
00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.146 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 
12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.147 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102884 kB' 'MemAvailable: 9497176 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 451892 kB' 'Inactive: 1279836 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121904 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132396 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71328 kB' 'KernelStack: 6240 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
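Once the HugePages_Rsvd lookup below resolves to 0, verify_nr_hugepages has all the numbers it echoes later in this trace (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and the hugepages.sh@107/@109 checks reduce to straightforward accounting. A sketch of that bookkeeping using this run's values (structure illustrative; which quantity supplies the literal 1025 on the left of the traced checks is not visible here, but numerically it all agrees):

  nr_hugepages=1025   # HugePages_Total in the dumps above
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  (( 1025 == nr_hugepages + surp + resv ))   # mirrors hugepages.sh@107
  (( 1025 == nr_hugepages ))                 # mirrors hugepages.sh@109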
00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.148 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' [xtrace condensed: setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / [[ $var == HugePages_Rsvd ]] / continue steps for each remaining /proc/meminfo field while get_meminfo scans for HugePages_Rsvd] 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # continue 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.149 nr_hugepages=1025 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:35.149 resv_hugepages=0 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.149 surplus_hugepages=0 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.149 anon_hugepages=0 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.149 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102380 kB' 'MemAvailable: 9496672 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452108 kB' 'Inactive: 1279836 kB' 'Active(anon): 131248 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132384 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71316 kB' 'KernelStack: 6292 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.150 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ [xtrace condensed: the same per-field read/skip loop runs over /proc/meminfo while get_meminfo scans for HugePages_Total] 00:03:35.151 12:45:51
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.151 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102156 kB' 'MemUsed: 4139816 kB' 'SwapCached: 0 kB' 'Active: 451796 kB' 'Inactive: 1279836 kB' 'Active(anon): 130936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1611160 kB' 'Mapped: 48700 kB' 'AnonPages: 121848 kB' 'Shmem: 10464 kB' 'KernelStack: 6276 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61068 kB' 'Slab: 132380 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
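The loop being traced here is setup/common.sh's get_meminfo helper: it picks /proc/meminfo, or the per-node file when a node number is passed (node 0 for this HugePages_Surp lookup), strips the per-node "Node N" prefix, and reads "key: value" pairs until the requested field turns up. A minimal self-contained sketch of that lookup, assuming the simplified name get_meminfo_sketch and omitting the script's extra bookkeeping:

#!/usr/bin/env bash
shopt -s extglob
# get_meminfo_sketch KEY [NODE] -- print KEY's value from /proc/meminfo,
# or from /sys/devices/system/node/nodeN/meminfo when NODE is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo lines carry a "Node <N> " prefix; strip it so both
    # file formats parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Total      # 1025 at this point in the run
get_meminfo_sketch HugePages_Surp 0     # 0 surplus pages on node 0

Run against this VM at this point in the test, the two calls at the bottom would print 1025 and 0, matching the values echoed in the trace.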
00:03:35.152 12:45:51 setup.sh.hugepages.odd_alloc -- [xtrace condensed: the per-field read/skip loop walks /sys/devices/system/node/node0/meminfo while get_meminfo scans for HugePages_Surp] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
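With the node-0 surplus count in hand, the test tallies hugepages per NUMA node: it globs /sys/devices/system/node/node*, seeds each node with the requested odd count, folds in reserved and surplus pages, and prints the "node0=1025 expecting 1025" line seen just below. A rough stand-alone version of that per-node accounting, using illustrative variable values rather than the exact hugepages.sh code:

#!/usr/bin/env bash
shopt -s extglob nullglob
# Per-node hugepage tally along the lines of the odd_alloc check:
# each detected NUMA node is expected to end up holding the full odd
# allocation (1025 pages in this run) once reserved and surplus pages
# are folded back in.
declare -A nodes_test
expected=1025   # nr_hugepages requested by the odd_alloc test
resv=0          # HugePages_Rsvd, read from /proc/meminfo earlier in the trace

for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}                                   # "node0" -> "0"
    surp=$(awk -v k='HugePages_Surp:' '$3 == k {print $4}' "$node/meminfo")
    nodes_test[$n]=$(( expected + resv + ${surp:-0} ))
    echo "node$n=${nodes_test[$n]} expecting $expected"
done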
00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.153 node0=1025 expecting 1025 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:35.153 00:03:35.153 real 0m0.499s 00:03:35.153 user 0m0.253s 00:03:35.153 sys 0m0.277s 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.153 12:45:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.153 ************************************ 00:03:35.153 END TEST odd_alloc 00:03:35.153 ************************************ 00:03:35.153 12:45:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.153 12:45:51 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:35.153 12:45:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.153 12:45:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.153 12:45:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.153 ************************************ 00:03:35.153 START TEST custom_alloc 00:03:35.153 ************************************ 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.153 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.154 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:35.154 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:35.154 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:35.154 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:35.154 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:35.154 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:35.154 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.154 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.675 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.675 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9155376 kB' 'MemAvailable: 10549668 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452660 kB' 'Inactive: 1279836 kB' 'Active(anon): 131800 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122928 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132408 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6292 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
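For reference, the custom_alloc steps traced above reduce to two pieces of arithmetic: a 1048576 kB request divided by the 2048 kB default hugepage size gives the 512 pages handed to scripts/setup.sh as HUGENODE='nodes_hp[0]=512', and verify_nr_hugepages then checks that the kernel's reported totals add back up to the configured count (it also samples AnonHugePages, which is what the scan below is doing, since transparent hugepages are only in madvise mode). A hedged sketch of both checks, using plain awk lookups in place of the script's get_meminfo helper:

#!/usr/bin/env bash
# Size-to-page-count conversion used by custom_alloc: a 1048576 kB request
# with the default 2048 kB hugepage size becomes 512 pages, pinned to
# node 0 via HUGENODE before scripts/setup.sh re-runs.
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
size_kb=1048576
nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 512 on this VM
echo "nr_hugepages=$nr_hugepages"
HUGENODE="nodes_hp[0]=$nr_hugepages"              # 'nodes_hp[0]=512' in this run
echo "HUGENODE=$HUGENODE"

# Accounting identity behind verify_nr_hugepages: the kernel's total should
# equal the configured count plus surplus and reserved pages.
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
fi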
00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.676 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 9155628 kB' 'MemAvailable: 10549920 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452176 kB' 'Inactive: 1279836 kB' 'Active(anon): 131316 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122432 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132408 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6272 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.677 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9155880 kB' 'MemAvailable: 10550172 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 451736 kB' 'Inactive: 1279836 kB' 'Active(anon): 130876 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122032 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132408 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.678 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.679 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.680 nr_hugepages=512 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:35.680 resv_hugepages=0 
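
[Reference sketch] The loop traced above is a per-key lookup over /proc/meminfo (or a per-node meminfo file when a NUMA node is given): each "Key: value" line is split on ': ', keys are skipped until the requested one matches, and its value is echoed. A minimal, self-contained bash sketch of that behaviour follows; the function name and the simplified read loop are illustrative assumptions, not the project's setup/common.sh source.

# Illustrative only: reconstructs what the traced get_meminfo helper appears to do.
get_meminfo_sketch() {
    local get=$1 node=${2:-}          # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    # Prefer the per-node meminfo file when a node is given and it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # "HugePages_Free:   512" splits into var=HugePages_Free, val=512
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Example: get_meminfo_sketch HugePages_Total   -> prints 512 on this runner
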
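[Reference sketch] The hugepages.sh steps traced here gather the anonymous, surplus and reserved hugepage counts (all 0 in this run) and then assert that the 512 pages requested by the custom_alloc test are fully accounted for (requested == nr_hugepages + surplus + reserved), before re-checking HugePages_Total. A standalone sketch of that consistency check follows, under the assumption that HugePages_Total stands in for nr_hugepages; the helper name and layout are illustrative, not the test's actual code.

# Illustrative only: mirrors the accounting check shown in the trace above.
expected=512                                            # pages requested by the test
read_key() { awk -v k="$1" -F': *' '$1 == k {print $2+0}' /proc/meminfo; }
total=$(read_key HugePages_Total)
surp=$(read_key HugePages_Surp)
resv=$(read_key HugePages_Rsvd)
anon=$(read_key AnonHugePages)
echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
(( expected == total + surp + resv )) && echo "hugepage accounting consistent"
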
00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.680 surplus_hugepages=0 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.680 anon_hugepages=0 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9155880 kB' 'MemAvailable: 10550172 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 451996 kB' 'Inactive: 1279836 kB' 'Active(anon): 131136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122292 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132408 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.680 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.680 12:45:51 
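The values echoed here feed a plain arithmetic check: with nr_hugepages=512 and resv, surp and anon all zero, hugepages.sh verifies that the kernel's HugePages_Total equals nr_hugepages + surp + resv (512 == 512 + 0 + 0). A standalone sketch of that bookkeeping using the numbers from this run; it paraphrases hugepages.sh@107/@110 rather than quoting the literal code:

    # Values as they appear in the trace for the custom_alloc test.
    nr_hugepages=512
    resv=0        # HugePages_Rsvd
    surp=0        # HugePages_Surp
    anon=0        # AnonHugePages (kB)

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 512 on this VM

    # Healthy pool: the kernel reports exactly the requested pages, none reserved or surplus.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK: total=$total"
    else
        echo "unexpected accounting: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    fi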
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace for the HugePages_Total scan condensed: every /proc/meminfo key from MemFree through FileHugePages failed the match and was skipped with continue / IFS=': ' / read -r var val _] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 --
# read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9156160 kB' 'MemUsed: 3085812 kB' 'SwapCached: 0 kB' 'Active: 451968 kB' 'Inactive: 1279836 kB' 'Active(anon): 131108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1611160 kB' 'Mapped: 48700 kB' 'AnonPages: 122216 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61068 kB' 'Slab: 132408 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.682 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.682 12:45:51 
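From here the same lookup is repeated per NUMA node against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix, and the result is compared with the per-node expectation the harness recorded earlier. A short sketch of that per-node comparison; the output format mirrors the trace ("node0=512 expecting 512"), while the array and loop body are illustrative rather than the verbatim hugepages.sh:

    shopt -s extglob nullglob
    declare -A nodes_expected=( [0]=512 )    # custom_alloc asked node 0 for 512 pages

    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # Per-node meminfo lines read "Node <id> HugePages_Total: <n>", hence field 4 below.
        actual=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        echo "node$node=$actual expecting ${nodes_expected[$node]:-0}"
        [[ $actual == "${nodes_expected[$node]:-0}" ]] || echo "node$node mismatch" >&2
    done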
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace for the node0 HugePages_Surp scan condensed: Inactive through ShmemPmdMapped failed the match and were skipped with continue / IFS=': ' / read -r var val _] 00:03:35.683 12:45:51
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.683 node0=512 expecting 512 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:35.683 00:03:35.683 real 0m0.543s 00:03:35.683 user 0m0.268s 00:03:35.683 sys 0m0.278s 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.683 12:45:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.683 ************************************ 00:03:35.683 END TEST custom_alloc 
00:03:35.683 ************************************ 00:03:35.683 12:45:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.683 12:45:51 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:35.683 12:45:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.683 12:45:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.683 12:45:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.942 ************************************ 00:03:35.942 START TEST no_shrink_alloc 00:03:35.942 ************************************ 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.942 12:45:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.206 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.206 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:36.206 
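For no_shrink_alloc, get_test_nr_hugepages turns the 2097152 budget into 1024 pages on node 0. That is consistent with treating the argument as a kB budget divided by the 2048 kB hugepage size (the later meminfo dump shows Hugetlb: 2097152 kB for 1024 pages), although the exact unit handling inside hugepages.sh is an inference here. The arithmetic, plus the standard per-node kernel knob that ends up reflecting the pool (not necessarily the exact path scripts/setup.sh writes to):

    size_kb=2097152      # budget passed to get_test_nr_hugepages in the trace
    hugepage_kb=2048     # Hugepagesize reported in the meminfo dumps
    nr=$(( size_kb / hugepage_kb ))   # 1024, the per-node figure the test expects
    echo "expecting $nr x ${hugepage_kb} kB pages on node 0"
    # Writing to this file is how such a pool is created outside of SPDK's setup.sh;
    # reading it after setup should report the 1024 pages the test verifies below.
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages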
12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8105064 kB' 'MemAvailable: 9499356 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452496 kB' 'Inactive: 1279836 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122760 kB' 'Mapped: 48888 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132416 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71348 kB' 'KernelStack: 6276 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
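The check at hugepages.sh@96 expands to [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: the string is the transparent hugepage mode line (madvise is selected on this VM), and the harness only goes on to read AnonHugePages when THP is not pinned to [never]. A standalone sketch of the same gate, assuming the string comes from the standard /sys/kernel/mm/transparent_hugepage/enabled file (the trace shows only the already-expanded value):

    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
    if [[ $thp_state != *"[never]"* ]]; then
        # THP is available, so anonymous huge pages could skew the totals; record them.
        # (AnonHugePages is 0 kB in the dump above, so the recorded value stays 0 for this run.)
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"
    fi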
00:03:36.206 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace for the AnonHugePages scan condensed: MemFree through CommitLimit failed the match and were skipped with continue / IFS=': ' / read -r var val _] 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.208 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
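What the trace above is doing: get_meminfo in setup/common.sh walks the captured /proc/meminfo snapshot with IFS=': ', skipping every key with continue until it reaches the requested one, then echoes its value; here AnonHugePages resolves to 0, which hugepages.sh stores as anon=0. A minimal, self-contained sketch of that loop, reconstructed from the trace (an illustrative approximation, not the verbatim setup/common.sh source; the per-node handling in particular is simplified):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo loop traced above (setup/common.sh@16-33).
    # Illustrative approximation, not the verbatim SPDK source.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}      # key to look up, optional NUMA node
        local var val
        local mem_f=/proc/meminfo mem
        # With a node number the per-node counters are used instead; in the
        # trace above node is empty, so the test falls back to /proc/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of "continue" above
            echo "$val"                        # value only, e.g. 0 for AnonHugePages
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # -> 0 in the run captured here

Against the snapshot printed further down, the same call with HugePages_Total would print 1024 and with HugePages_Surp would print 0.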
00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.207 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.208 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.208 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.208 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.208 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.208 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.208 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8105064 kB' 'MemAvailable: 9499356 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452176 kB' 'Inactive: 1279836 kB' 'Active(anon): 131316 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132400 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71332 kB' 'KernelStack: 6228 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32: the IFS=': ' / read -r var val _ / continue trace repeats while the scan for HugePages_Surp works through MemTotal, MemFree, ..., HugePages_Rsvd ...]
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
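HugePages_Surp resolves to 0 the same way and is stored as surp=0. Each lookup rescans the whole snapshot for a single key; for a quick manual spot-check outside the harness, the same counters can be pulled in one pass (illustrative command, not part of the test scripts):

    # The hugepage counters the test reads one at a time via get_meminfo.
    grep -E '^(AnonHugePages|HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo

Against the snapshot printed above, that reports AnonHugePages: 0 kB, HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which are the inputs for the accounting check that follows.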
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.209 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8105768 kB' 'MemAvailable: 9500060 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452012 kB' 'Inactive: 1279836 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132408 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6288 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32: the IFS=': ' / read -r var val _ / continue trace repeats while the scan for HugePages_Rsvd works through MemTotal, MemFree, ..., HugePages_Free ...]
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:36.211 nr_hugepages=1024
12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:36.211 resv_hugepages=0
12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:36.211 surplus_hugepages=0
12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:36.211 anon_hugepages=0
12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
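The block above is the heart of the no_shrink_alloc check: with anon=0, surp=0 and resv=0 derived from the snapshots, hugepages.sh@107 asserts that 1024 equals nr_hugepages + surp + resv, hugepages.sh@109 asserts that 1024 equals nr_hugepages on its own, and only then does it re-read HugePages_Total. A hedged sketch of that arithmetic using the values visible in the trace (the name "requested" is illustrative; the trace uses the literal 1024, and the surrounding function in setup/hugepages.sh may differ in detail):

    # Values derived just above in the trace.
    requested=1024      # illustrative name for the literal 1024 at @107/@109
    nr_hugepages=1024   # echoed as nr_hugepages=1024
    surp=0              # from get_meminfo HugePages_Surp
    resv=0              # from get_meminfo HugePages_Rsvd

    # setup/hugepages.sh@107: the page count still adds up once surplus and
    # reserved pages are included ...
    (( requested == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    # setup/hugepages.sh@109: ... and the configured count alone still matches.
    (( requested == nr_hugepages )) || echo "nr_hugepages changed" >&2

The get_meminfo HugePages_Total call that follows (1024 in the snapshot below) feeds the next comparison in the same spirit.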
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.211 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8105768 kB' 'MemAvailable: 9500060 kB' 'Buffers: 2436 kB' 'Cached: 1608724 kB' 'SwapCached: 0 kB' 'Active: 452004 kB' 'Inactive: 1279836 kB' 'Active(anon): 131144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122288 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132400 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71332 kB' 'KernelStack: 6288 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32: the IFS=': ' / read -r var val _ / continue trace repeats while the scan for HugePages_Total works through MemTotal, MemFree, ..., Mapped ...]
00:03:36.212 12:45:52
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.212 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8105768 kB' 'MemUsed: 4136204 kB' 'SwapCached: 0 kB' 'Active: 452012 kB' 'Inactive: 1279836 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1611160 kB' 'Mapped: 48700 kB' 'AnonPages: 122308 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61068 kB' 'Slab: 132400 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.213 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 
12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.214 node0=1024 expecting 1024 00:03:36.214 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.215 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.215 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:36.215 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:36.215 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:36.215 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.215 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.814 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.814 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.814 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:36.814 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103764 kB' 'MemAvailable: 9498060 kB' 'Buffers: 2436 kB' 'Cached: 1608728 kB' 'SwapCached: 0 kB' 'Active: 452696 kB' 'Inactive: 1279840 kB' 'Active(anon): 131836 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132392 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6348 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
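Just above, hugepages.sh completed the first verification pass (every node directory under /sys/devices/system/node is enumerated and node0's 1024 hugepages match the expected count), re-ran scripts/setup.sh with NRHUGE=512 (which reports that 1024 pages are already allocated on node0), and began a second verify_nr_hugepages pass, which first checks that transparent hugepages are not set to [never] before counting AnonHugePages. A rough sketch of those two checks, written against standard sysfs/procfs paths rather than the actual setup/hugepages.sh code (the expected count of 1024 is taken from the log):

    # Sketch only: approximates the per-node hugepage check ("node0=1024 expecting
    # 1024") and the THP guard seen in the trace; not the real setup/hugepages.sh.
    shopt -s extglob nullglob
    expected=1024
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # Per-node meminfo lines look like: "Node 0 HugePages_Total:  1024"
        total=$(awk -v n="$node" \
            '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' \
            "$node_dir/meminfo")
        echo "node${node}=${total:-0} expecting ${expected}"
    done
    # Only count anonymous THP usage when THP is not globally disabled, i.e. the
    # enabled file does not show "[never]" selected (the trace shows
    # "always [madvise] never" being tested against *[never]*).
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        echo "anon_hugepages=${anon:-0} kB"
    fi

With the meminfo dump shown in this log, the sketch would report node0=1024 expecting 1024 and anon_hugepages=0 kB, consistent with the values the scripted run echoes.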
00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.814 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.815 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8104108 kB' 'MemAvailable: 9498404 kB' 'Buffers: 2436 kB' 'Cached: 1608728 kB' 'SwapCached: 0 kB' 'Active: 451976 kB' 'Inactive: 1279840 kB' 'Active(anon): 131116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122228 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132392 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6272 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 
12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8104456 kB' 'MemAvailable: 9498752 kB' 'Buffers: 2436 kB' 'Cached: 1608728 kB' 'SwapCached: 0 kB' 'Active: 451772 kB' 'Inactive: 1279840 kB' 'Active(anon): 130912 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122056 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132392 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
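The pass above is the shell stepping, under xtrace, through a /proc/meminfo lookup: read the file into an array with mapfile, strip any leading "Node N " prefix (per-node files under /sys/devices/system/node/nodeN/meminfo carry one; the system-wide file does not), split each "key: value" line on ": ", and print the value of the requested key. A minimal standalone sketch of that pattern, with an illustrative function name rather than the repository's setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob                            # needed for the +([0-9]) pattern below

# Illustrative helper, not the project's get_meminfo: look up one key in
# /proc/meminfo (or a per-node meminfo file when a node number is given).
get_meminfo_value() {
    local key=$1 node=${2-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$file"
    mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " prefix if present
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] || continue
        echo "$val"                         # kB figure, or a bare count for HugePages_*
        return 0
    done
    return 1
}

get_meminfo_value HugePages_Surp            # prints 0 on the machine traced above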
00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
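The snapshots printed by these passes carry their own cross-check: HugePages_Total (1024) times Hugepagesize (2048 kB) is 2097152 kB, exactly the Hugetlb figure reported alongside them. A throwaway one-liner to confirm the same relation on a live machine (illustrative, not part of the test scripts):

awk '/^HugePages_Total:/ { n   = $2 }
     /^Hugepagesize:/    { sz  = $2 }
     /^Hugetlb:/         { tlb = $2 }
     END { printf "computed=%d kB reported=%d kB\n", n * sz, tlb }' /proc/meminfo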
00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.819 nr_hugepages=1024 00:03:36.819 resv_hugepages=0 00:03:36.819 surplus_hugepages=0 00:03:36.819 anon_hugepages=0 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.819 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8104716 kB' 'MemAvailable: 9499012 kB' 'Buffers: 2436 kB' 'Cached: 1608728 kB' 'SwapCached: 0 kB' 'Active: 452064 kB' 'Inactive: 1279840 kB' 'Active(anon): 131204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122324 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132392 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
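Taken together, the three lookups are simple pool bookkeeping: anonymous, surplus and reserved huge pages all read back as 0, the kernel reports 1024 pages, and that matches the count the test requested, which is what the traced (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) checks assert. A hedged, self-contained sketch of that bookkeeping, with illustrative helper and variable names rather than the repository's setup/hugepages.sh:

#!/usr/bin/env bash
# Illustrative helper: print one /proc/meminfo value as a bare number.
read_meminfo() { awk -v k="$1" -F': *' '$1 == k { print $2 + 0; exit }' /proc/meminfo; }

expected=1024                                  # page count the test asked for
anon=$(read_meminfo AnonHugePages)             # 0 kB in the run traced above
surp=$(read_meminfo HugePages_Surp)            # 0
resv=$(read_meminfo HugePages_Rsvd)            # 0
nr_hugepages=$(read_meminfo HugePages_Total)   # 1024
printf '%s\n' "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
              "surplus_hugepages=$surp" "anon_hugepages=$anon"
# Healthy state: the requested count is fully backed, with no surplus or
# reserved pages skewing the total.
(( expected == nr_hugepages + surp + resv )) || exit 1
(( expected == nr_hugepages ))               || exit 1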
00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.820 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8104720 kB' 'MemUsed: 4137252 kB' 'SwapCached: 0 kB' 'Active: 
451752 kB' 'Inactive: 1279840 kB' 'Active(anon): 130892 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1279840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1611164 kB' 'Mapped: 48700 kB' 'AnonPages: 122312 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61068 kB' 'Slab: 132388 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.821 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 
12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.822 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.823 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.823 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.823 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.823 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.823 node0=1024 expecting 1024 00:03:36.823 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.823 12:45:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.823 00:03:36.823 real 0m1.050s 00:03:36.823 user 0m0.543s 00:03:36.823 sys 0m0.556s 00:03:36.823 12:45:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.823 12:45:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.823 ************************************ 00:03:36.823 END TEST no_shrink_alloc 00:03:36.823 ************************************ 00:03:36.823 12:45:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:36.823 12:45:52 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:36.823 12:45:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:36.823 12:45:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:36.823 
12:45:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.823 12:45:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.823 12:45:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.823 12:45:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.823 12:45:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:36.823 12:45:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:36.823 00:03:36.823 real 0m4.531s 00:03:36.823 user 0m2.279s 00:03:36.823 sys 0m2.362s 00:03:36.823 12:45:52 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.823 12:45:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.823 ************************************ 00:03:36.823 END TEST hugepages 00:03:36.823 ************************************ 00:03:37.082 12:45:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:37.082 12:45:52 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:37.082 12:45:52 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.082 12:45:52 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.082 12:45:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.082 ************************************ 00:03:37.082 START TEST driver 00:03:37.082 ************************************ 00:03:37.082 12:45:52 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:37.082 * Looking for test storage... 00:03:37.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.082 12:45:52 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:37.082 12:45:52 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.082 12:45:52 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.657 12:45:53 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:37.657 12:45:53 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.657 12:45:53 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.657 12:45:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.657 ************************************ 00:03:37.657 START TEST guess_driver 00:03:37.657 ************************************ 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
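
The long per-key trace in the hugepages tests above comes from get_meminfo in setup/common.sh, which walks /proc/meminfo (or the per-node /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix) field by field until it reaches the requested key. Below is a minimal sketch of the same lookup done with sed/awk instead of the script's pure-bash read loop (the loop is what produces the long per-key trace); the function name and interface here are illustrative, not part of setup/common.sh.

# Look up one meminfo key, optionally scoped to a NUMA node.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        # Per-node meminfo lines look like "Node 0 MemTotal: ... kB".
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Drop the optional "Node <N> " prefix, then print the value for $key.
    sed -E 's/^Node [0-9]+ +//' "$mem_f" | awk -v k="$key" -F'[: ]+' '$1 == k { print $2 }'
}

# Usage (values match what the trace above observed):
#   get_meminfo_value HugePages_Total      # system-wide -> 1024
#   get_meminfo_value HugePages_Surp 0     # node 0      -> 0
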
00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:37.657 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:37.657 Looking for driver=uio_pci_generic 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.657 12:45:53 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:38.221 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:38.221 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:38.221 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.479 12:45:54 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.094 00:03:39.094 real 0m1.487s 00:03:39.094 user 0m0.566s 00:03:39.094 sys 0m0.895s 00:03:39.094 12:45:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:39.094 12:45:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:39.094 ************************************ 00:03:39.094 END TEST guess_driver 00:03:39.094 ************************************ 00:03:39.094 12:45:55 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:39.094 00:03:39.094 real 0m2.179s 00:03:39.094 user 0m0.821s 00:03:39.094 sys 0m1.361s 00:03:39.094 12:45:55 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.094 12:45:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:39.094 ************************************ 00:03:39.094 END TEST driver 00:03:39.094 ************************************ 00:03:39.094 12:45:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:39.094 12:45:55 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:39.094 12:45:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.094 12:45:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.094 12:45:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.094 ************************************ 00:03:39.094 START TEST devices 00:03:39.094 ************************************ 00:03:39.094 12:45:55 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:39.352 * Looking for test storage... 00:03:39.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:39.352 12:45:55 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:39.352 12:45:55 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:39.352 12:45:55 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.352 12:45:55 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.919 12:45:55 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:39.919 12:45:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:39.919 12:45:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:39.919 12:45:55 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:39.919 12:45:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.919 12:45:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
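
The guess_driver trace above follows a simple preference order: use vfio-pci when IOMMU groups are present (or VFIO's unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve it to a module on disk. A condensed sketch of that decision, assuming the same sysfs paths; this is an illustration, not the setup/driver.sh implementation.

pick_vfio_or_uio() {
    # Count IOMMU groups; nullglob keeps the array empty when there are none.
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    shopt -u nullglob

    local unsafe=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi

    # uio_pci_generic is usable if modprobe can resolve it to a .ko on disk,
    # which is what the "modprobe --show-depends" call in the trace checks.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi

    echo 'No valid driver found' >&2
    return 1
}
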
00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:39.920 12:45:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:39.920 12:45:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:39.920 12:45:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:39.920 12:45:55 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:39.920 No valid GPT data, bailing 00:03:40.178 12:45:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.178 12:45:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.178 12:45:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:40.178 12:45:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:40.178 12:45:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:40.178 12:45:55 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:40.178 12:45:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:40.178 
12:45:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:40.178 12:45:55 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:40.178 No valid GPT data, bailing 00:03:40.178 12:45:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:40.178 12:45:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.178 12:45:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.178 12:45:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:40.178 12:45:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:40.178 12:45:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:40.178 12:45:56 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:40.179 No valid GPT data, bailing 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:40.179 12:45:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:40.179 12:45:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:40.179 12:45:56 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:40.179 12:45:56 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:40.179 No valid GPT data, bailing 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.179 12:45:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:40.179 12:45:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:40.179 12:45:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:40.179 12:45:56 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:40.179 12:45:56 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:40.179 12:45:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.179 12:45:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.179 12:45:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.437 ************************************ 00:03:40.437 START TEST nvme_mount 00:03:40.437 ************************************ 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.437 12:45:56 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:41.372 Creating new GPT entries in memory. 00:03:41.372 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.372 other utilities. 00:03:41.372 12:45:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.372 12:45:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.372 12:45:57 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.372 12:45:57 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.372 12:45:57 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:42.308 Creating new GPT entries in memory. 00:03:42.308 The operation has completed successfully. 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56947 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.308 12:45:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:42.566 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:42.566 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:42.566 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:42.566 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.566 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:42.566 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:42.825 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:42.825 12:45:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:43.084 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:43.084 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:43.084 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:43.084 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:43.084 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:43.084 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:43.084 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.084 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:43.084 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:43.084 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.084 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.084 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.343 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.601 12:45:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.601 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:43.602 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.602 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.602 12:45:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.860 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.860 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:43.860 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.860 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.860 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.860 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.118 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.118 12:45:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.118 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.118 00:03:44.118 real 0m3.883s 00:03:44.118 user 0m0.647s 00:03:44.118 sys 0m0.985s 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.118 12:46:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:44.118 ************************************ 00:03:44.118 END TEST nvme_mount 00:03:44.118 ************************************ 00:03:44.118 12:46:00 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:44.118 12:46:00 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:44.118 12:46:00 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.118 12:46:00 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.118 12:46:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.118 ************************************ 00:03:44.118 START TEST dm_mount 00:03:44.118 ************************************ 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:44.118 12:46:00 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:45.538 Creating new GPT entries in memory. 00:03:45.539 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:45.539 other utilities. 00:03:45.539 12:46:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:45.539 12:46:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.539 12:46:01 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:45.539 12:46:01 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.539 12:46:01 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:46.469 Creating new GPT entries in memory. 00:03:46.469 The operation has completed successfully. 00:03:46.469 12:46:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:46.469 12:46:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.469 12:46:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:46.469 12:46:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.469 12:46:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:47.452 The operation has completed successfully. 
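The sgdisk calls traced above are how the dm_mount test carves two equally sized GPT partitions out of nvme0n1 before they are bound into a device-mapper target. A minimal standalone sketch of that step, assuming a disposable scratch disk and reusing the sector ranges from the trace (the lines below are an illustration, not part of the captured run):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                            # drop any existing GPT/MBR metadata
    flock "$disk" sgdisk "$disk" --new=1:2048:264191    # partition 1 -> nvme0n1p1
    flock "$disk" sgdisk "$disk" --new=2:264192:526335  # partition 2 -> nvme0n1p2

In the harness the same commands run from setup/common.sh, which also invokes scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 so the test only continues once the kernel has announced both new partitions.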
00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57380 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.452 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:47.709 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.709 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:47.709 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:47.709 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.709 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.709 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.709 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.709 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.966 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.966 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.966 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.966 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:47.966 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.966 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:47.966 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:47.966 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.967 12:46:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:48.225 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.225 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:48.225 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:48.225 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.225 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.225 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.225 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.225 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:48.483 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:48.483 00:03:48.483 real 0m4.240s 00:03:48.483 user 0m0.464s 00:03:48.483 sys 0m0.695s 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.483 12:46:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:48.483 ************************************ 00:03:48.483 END TEST dm_mount 00:03:48.483 ************************************ 00:03:48.483 12:46:04 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:48.483 12:46:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:48.483 12:46:04 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:48.483 12:46:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:48.483 12:46:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.483 12:46:04 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:48.483 12:46:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.483 12:46:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:48.742 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:48.742 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:48.742 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:48.742 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:48.742 12:46:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:48.742 12:46:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.742 12:46:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:48.742 12:46:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.742 12:46:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:48.742 12:46:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.742 12:46:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:48.742 00:03:48.742 real 0m9.628s 00:03:48.742 user 0m1.737s 00:03:48.742 sys 0m2.276s 00:03:48.742 12:46:04 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.742 ************************************ 00:03:48.742 END TEST devices 00:03:48.742 12:46:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:48.742 ************************************ 00:03:48.742 12:46:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:48.742 00:03:48.742 real 0m21.229s 00:03:48.742 user 0m6.973s 00:03:48.742 sys 0m8.717s 00:03:48.742 12:46:04 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.742 12:46:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.742 ************************************ 00:03:48.742 END TEST setup.sh 00:03:48.742 ************************************ 00:03:49.033 12:46:04 -- common/autotest_common.sh@1142 -- # return 0 00:03:49.033 12:46:04 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:49.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.620 Hugepages 00:03:49.620 node hugesize free / total 00:03:49.620 node0 1048576kB 0 / 0 00:03:49.620 node0 2048kB 2048 / 2048 00:03:49.620 00:03:49.620 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:49.878 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:49.878 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:49.878 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:49.878 12:46:05 -- spdk/autotest.sh@130 -- # uname -s 00:03:49.878 12:46:05 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:49.878 12:46:05 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:49.878 12:46:05 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:50.445 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.702 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.702 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.702 12:46:06 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:52.075 12:46:07 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:52.075 12:46:07 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:52.075 12:46:07 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:52.075 12:46:07 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:52.075 12:46:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:52.075 12:46:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:52.075 12:46:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:52.075 12:46:07 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:52.075 12:46:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:52.075 12:46:07 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:52.075 12:46:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:52.075 12:46:07 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.075 Waiting for block devices as requested 00:03:52.333 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:52.333 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:52.333 12:46:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:52.333 12:46:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:52.333 12:46:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:52.333 12:46:08 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:03:52.333 12:46:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:52.333 12:46:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:52.333 12:46:08 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:52.333 12:46:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:03:52.333 12:46:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:03:52.333 12:46:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:03:52.333 12:46:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:03:52.333 12:46:08 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:52.333 12:46:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:52.333 12:46:08 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:52.333 12:46:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:52.334 12:46:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:52.334 12:46:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:03:52.334 12:46:08 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:52.334 12:46:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:52.334 12:46:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:52.334 12:46:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:52.334 12:46:08 -- common/autotest_common.sh@1557 -- # continue 00:03:52.334 
12:46:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:52.334 12:46:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:52.334 12:46:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:52.334 12:46:08 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:03:52.334 12:46:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:52.334 12:46:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:52.334 12:46:08 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:52.334 12:46:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:52.334 12:46:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:52.334 12:46:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:52.334 12:46:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:52.334 12:46:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:52.334 12:46:08 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:52.334 12:46:08 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:52.334 12:46:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:52.334 12:46:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:52.334 12:46:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:52.334 12:46:08 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:52.334 12:46:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:52.334 12:46:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:52.334 12:46:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:52.334 12:46:08 -- common/autotest_common.sh@1557 -- # continue 00:03:52.334 12:46:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:52.334 12:46:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:52.334 12:46:08 -- common/autotest_common.sh@10 -- # set +x 00:03:52.591 12:46:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:52.591 12:46:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:52.591 12:46:08 -- common/autotest_common.sh@10 -- # set +x 00:03:52.591 12:46:08 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:53.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.158 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.158 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.417 12:46:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:53.417 12:46:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.417 12:46:09 -- common/autotest_common.sh@10 -- # set +x 00:03:53.417 12:46:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:53.417 12:46:09 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:53.417 12:46:09 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:53.417 12:46:09 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:53.417 12:46:09 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:53.417 12:46:09 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:53.417 12:46:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:53.417 12:46:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:53.417 12:46:09 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.417 12:46:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:53.417 12:46:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:53.417 12:46:09 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:53.417 12:46:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:53.417 12:46:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:53.417 12:46:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:53.417 12:46:09 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:53.417 12:46:09 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:53.417 12:46:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:53.417 12:46:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:53.417 12:46:09 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:53.417 12:46:09 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:53.417 12:46:09 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:53.417 12:46:09 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:53.417 12:46:09 -- common/autotest_common.sh@1593 -- # return 0 00:03:53.417 12:46:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:53.417 12:46:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:53.417 12:46:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:53.417 12:46:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:53.417 12:46:09 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:53.417 12:46:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.417 12:46:09 -- common/autotest_common.sh@10 -- # set +x 00:03:53.417 12:46:09 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:03:53.417 12:46:09 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:03:53.417 12:46:09 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:03:53.417 12:46:09 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:53.417 12:46:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.417 12:46:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.417 12:46:09 -- common/autotest_common.sh@10 -- # set +x 00:03:53.417 ************************************ 00:03:53.417 START TEST env 00:03:53.417 ************************************ 00:03:53.417 12:46:09 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:53.417 * Looking for test storage... 
00:03:53.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:53.417 12:46:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:53.417 12:46:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.417 12:46:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.417 12:46:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.417 ************************************ 00:03:53.417 START TEST env_memory 00:03:53.417 ************************************ 00:03:53.417 12:46:09 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:53.676 00:03:53.676 00:03:53.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.676 http://cunit.sourceforge.net/ 00:03:53.676 00:03:53.676 00:03:53.676 Suite: memory 00:03:53.676 Test: alloc and free memory map ...[2024-07-15 12:46:09.508626] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:53.676 passed 00:03:53.676 Test: mem map translation ...[2024-07-15 12:46:09.532923] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:53.676 [2024-07-15 12:46:09.532958] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:53.676 [2024-07-15 12:46:09.533003] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:53.676 [2024-07-15 12:46:09.533011] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:53.676 passed 00:03:53.676 Test: mem map registration ...[2024-07-15 12:46:09.585039] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:53.676 [2024-07-15 12:46:09.585078] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:53.676 passed 00:03:53.676 Test: mem map adjacent registrations ...passed 00:03:53.676 00:03:53.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.676 suites 1 1 n/a 0 0 00:03:53.676 tests 4 4 4 0 0 00:03:53.676 asserts 152 152 152 0 n/a 00:03:53.676 00:03:53.676 Elapsed time = 0.175 seconds 00:03:53.676 00:03:53.676 real 0m0.189s 00:03:53.676 user 0m0.177s 00:03:53.676 sys 0m0.010s 00:03:53.676 12:46:09 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.676 ************************************ 00:03:53.676 12:46:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:53.676 END TEST env_memory 00:03:53.676 ************************************ 00:03:53.676 12:46:09 env -- common/autotest_common.sh@1142 -- # return 0 00:03:53.676 12:46:09 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:53.676 12:46:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.676 12:46:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.676 12:46:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.676 ************************************ 00:03:53.676 START TEST env_vtophys 
00:03:53.676 ************************************ 00:03:53.676 12:46:09 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:53.676 EAL: lib.eal log level changed from notice to debug 00:03:53.676 EAL: Detected lcore 0 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 1 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 2 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 3 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 4 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 5 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 6 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 7 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 8 as core 0 on socket 0 00:03:53.676 EAL: Detected lcore 9 as core 0 on socket 0 00:03:53.676 EAL: Maximum logical cores by configuration: 128 00:03:53.676 EAL: Detected CPU lcores: 10 00:03:53.676 EAL: Detected NUMA nodes: 1 00:03:53.676 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:53.676 EAL: Detected shared linkage of DPDK 00:03:53.935 EAL: No shared files mode enabled, IPC will be disabled 00:03:53.935 EAL: Selected IOVA mode 'PA' 00:03:53.935 EAL: Probing VFIO support... 00:03:53.935 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:53.935 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:53.935 EAL: Ask a virtual area of 0x2e000 bytes 00:03:53.935 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:53.935 EAL: Setting up physically contiguous memory... 00:03:53.935 EAL: Setting maximum number of open files to 524288 00:03:53.935 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:53.935 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:53.935 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.935 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:53.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.935 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.935 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:53.935 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:53.935 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.935 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:53.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.935 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.935 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:53.935 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:53.935 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.935 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:53.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.935 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.935 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:53.935 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:53.935 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.935 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:53.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.935 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.935 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:53.935 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:53.935 EAL: Hugepages will be freed exactly as allocated. 
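The memseg lists reserved above use 2 MiB pages (page size 0x800kB), which matches the hugepage pool provisioned earlier in the run; the setup.sh status output reported node0 2048kB 2048 / 2048. A hedged sketch of provisioning such a pool by hand, assuming a single NUMA node and root privileges (the harness normally leaves this to scripts/setup.sh, and the sysfs/hugetlbfs paths below are the standard kernel locations, not taken from this log):

    # Reserve 2048 x 2 MiB hugepages on node 0 and expose them via a hugetlbfs mount.
    echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    mkdir -p /dev/hugepages
    mountpoint -q /dev/hugepages || mount -t hugetlbfs nodev /dev/hugepages
    grep -i hugepages /proc/meminfo    # confirm HugePages_Total / HugePages_Free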
00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: TSC frequency is ~2200000 KHz 00:03:53.935 EAL: Main lcore 0 is ready (tid=7f0220c29a00;cpuset=[0]) 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 0 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 2MB 00:03:53.935 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:53.935 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:53.935 EAL: Mem event callback 'spdk:(nil)' registered 00:03:53.935 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:53.935 00:03:53.935 00:03:53.935 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.935 http://cunit.sourceforge.net/ 00:03:53.935 00:03:53.935 00:03:53.935 Suite: components_suite 00:03:53.935 Test: vtophys_malloc_test ...passed 00:03:53.935 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.935 EAL: Trying to obtain current memory policy. 
00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 34MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 66MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 66MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.936 EAL: Restoring previous memory policy: 4 00:03:53.936 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.936 EAL: request: mp_malloc_sync 00:03:53.936 EAL: No shared files mode enabled, IPC is disabled 00:03:53.936 EAL: Heap on socket 0 was expanded by 130MB 00:03:54.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.194 EAL: request: mp_malloc_sync 00:03:54.194 EAL: No shared files mode enabled, IPC is disabled 00:03:54.194 EAL: Heap on socket 0 was shrunk by 130MB 00:03:54.194 EAL: Trying to obtain current memory policy. 00:03:54.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.194 EAL: Restoring previous memory policy: 4 00:03:54.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.194 EAL: request: mp_malloc_sync 00:03:54.194 EAL: No shared files mode enabled, IPC is disabled 00:03:54.194 EAL: Heap on socket 0 was expanded by 258MB 00:03:54.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.194 EAL: request: mp_malloc_sync 00:03:54.194 EAL: No shared files mode enabled, IPC is disabled 00:03:54.194 EAL: Heap on socket 0 was shrunk by 258MB 00:03:54.194 EAL: Trying to obtain current memory policy. 
00:03:54.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.453 EAL: Restoring previous memory policy: 4 00:03:54.453 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.453 EAL: request: mp_malloc_sync 00:03:54.453 EAL: No shared files mode enabled, IPC is disabled 00:03:54.453 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.453 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.453 EAL: request: mp_malloc_sync 00:03:54.453 EAL: No shared files mode enabled, IPC is disabled 00:03:54.453 EAL: Heap on socket 0 was shrunk by 514MB 00:03:54.453 EAL: Trying to obtain current memory policy. 00:03:54.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.711 EAL: Restoring previous memory policy: 4 00:03:54.711 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.711 EAL: request: mp_malloc_sync 00:03:54.711 EAL: No shared files mode enabled, IPC is disabled 00:03:54.711 EAL: Heap on socket 0 was expanded by 1026MB 00:03:54.969 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.228 passed 00:03:55.228 00:03:55.228 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.228 suites 1 1 n/a 0 0 00:03:55.228 tests 2 2 2 0 0 00:03:55.228 asserts 5316 5316 5316 0 n/a 00:03:55.228 00:03:55.228 Elapsed time = 1.255 seconds 00:03:55.228 EAL: request: mp_malloc_sync 00:03:55.228 EAL: No shared files mode enabled, IPC is disabled 00:03:55.228 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:55.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.228 EAL: request: mp_malloc_sync 00:03:55.228 EAL: No shared files mode enabled, IPC is disabled 00:03:55.228 EAL: Heap on socket 0 was shrunk by 2MB 00:03:55.228 EAL: No shared files mode enabled, IPC is disabled 00:03:55.228 EAL: No shared files mode enabled, IPC is disabled 00:03:55.228 EAL: No shared files mode enabled, IPC is disabled 00:03:55.228 00:03:55.228 real 0m1.443s 00:03:55.228 user 0m0.784s 00:03:55.228 sys 0m0.527s 00:03:55.228 12:46:11 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.228 ************************************ 00:03:55.228 END TEST env_vtophys 00:03:55.228 ************************************ 00:03:55.228 12:46:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:55.228 12:46:11 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.228 12:46:11 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.228 12:46:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.228 12:46:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.228 12:46:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.228 ************************************ 00:03:55.228 START TEST env_pci 00:03:55.228 ************************************ 00:03:55.228 12:46:11 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.228 00:03:55.228 00:03:55.228 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.228 http://cunit.sourceforge.net/ 00:03:55.228 00:03:55.228 00:03:55.228 Suite: pci 00:03:55.228 Test: pci_hook ...[2024-07-15 12:46:11.213140] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58567 has claimed it 00:03:55.228 passed 00:03:55.228 00:03:55.228 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.228 suites 1 1 n/a 0 0 00:03:55.228 tests 1 1 1 0 0 00:03:55.228 asserts 25 25 25 0 n/a 00:03:55.228 
00:03:55.228 Elapsed time = 0.002 seconds 00:03:55.228 EAL: Cannot find device (10000:00:01.0) 00:03:55.228 EAL: Failed to attach device on primary process 00:03:55.228 ************************************ 00:03:55.228 END TEST env_pci 00:03:55.228 ************************************ 00:03:55.228 00:03:55.228 real 0m0.022s 00:03:55.228 user 0m0.012s 00:03:55.228 sys 0m0.008s 00:03:55.228 12:46:11 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.228 12:46:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:55.228 12:46:11 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.228 12:46:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.228 12:46:11 env -- env/env.sh@15 -- # uname 00:03:55.228 12:46:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:55.228 12:46:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:55.228 12:46:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.228 12:46:11 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:55.228 12:46:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.228 12:46:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.228 ************************************ 00:03:55.228 START TEST env_dpdk_post_init 00:03:55.228 ************************************ 00:03:55.228 12:46:11 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.486 EAL: Detected CPU lcores: 10 00:03:55.486 EAL: Detected NUMA nodes: 1 00:03:55.486 EAL: Detected shared linkage of DPDK 00:03:55.486 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.486 EAL: Selected IOVA mode 'PA' 00:03:55.486 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.486 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:55.486 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:55.486 Starting DPDK initialization... 00:03:55.486 Starting SPDK post initialization... 00:03:55.487 SPDK NVMe probe 00:03:55.487 Attaching to 0000:00:10.0 00:03:55.487 Attaching to 0000:00:11.0 00:03:55.487 Attached to 0000:00:10.0 00:03:55.487 Attached to 0000:00:11.0 00:03:55.487 Cleaning up... 
00:03:55.487 ************************************ 00:03:55.487 END TEST env_dpdk_post_init 00:03:55.487 ************************************ 00:03:55.487 00:03:55.487 real 0m0.177s 00:03:55.487 user 0m0.042s 00:03:55.487 sys 0m0.033s 00:03:55.487 12:46:11 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.487 12:46:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.487 12:46:11 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.487 12:46:11 env -- env/env.sh@26 -- # uname 00:03:55.487 12:46:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.487 12:46:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.487 12:46:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.487 12:46:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.487 12:46:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.487 ************************************ 00:03:55.487 START TEST env_mem_callbacks 00:03:55.487 ************************************ 00:03:55.487 12:46:11 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.487 EAL: Detected CPU lcores: 10 00:03:55.487 EAL: Detected NUMA nodes: 1 00:03:55.487 EAL: Detected shared linkage of DPDK 00:03:55.487 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.487 EAL: Selected IOVA mode 'PA' 00:03:55.745 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.745 00:03:55.745 00:03:55.745 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.745 http://cunit.sourceforge.net/ 00:03:55.745 00:03:55.745 00:03:55.745 Suite: memory 00:03:55.745 Test: test ... 
00:03:55.745 register 0x200000200000 2097152 00:03:55.745 malloc 3145728 00:03:55.745 register 0x200000400000 4194304 00:03:55.745 buf 0x200000500000 len 3145728 PASSED 00:03:55.745 malloc 64 00:03:55.745 buf 0x2000004fff40 len 64 PASSED 00:03:55.745 malloc 4194304 00:03:55.745 register 0x200000800000 6291456 00:03:55.745 buf 0x200000a00000 len 4194304 PASSED 00:03:55.745 free 0x200000500000 3145728 00:03:55.745 free 0x2000004fff40 64 00:03:55.745 unregister 0x200000400000 4194304 PASSED 00:03:55.745 free 0x200000a00000 4194304 00:03:55.745 unregister 0x200000800000 6291456 PASSED 00:03:55.745 malloc 8388608 00:03:55.745 register 0x200000400000 10485760 00:03:55.745 buf 0x200000600000 len 8388608 PASSED 00:03:55.745 free 0x200000600000 8388608 00:03:55.745 unregister 0x200000400000 10485760 PASSED 00:03:55.745 passed 00:03:55.745 00:03:55.745 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.745 suites 1 1 n/a 0 0 00:03:55.745 tests 1 1 1 0 0 00:03:55.745 asserts 15 15 15 0 n/a 00:03:55.745 00:03:55.745 Elapsed time = 0.007 seconds 00:03:55.745 00:03:55.745 real 0m0.142s 00:03:55.745 user 0m0.015s 00:03:55.745 sys 0m0.024s 00:03:55.745 12:46:11 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.745 12:46:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:55.745 ************************************ 00:03:55.745 END TEST env_mem_callbacks 00:03:55.745 ************************************ 00:03:55.745 12:46:11 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.745 ************************************ 00:03:55.745 END TEST env 00:03:55.745 ************************************ 00:03:55.745 00:03:55.745 real 0m2.293s 00:03:55.745 user 0m1.133s 00:03:55.745 sys 0m0.811s 00:03:55.745 12:46:11 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.745 12:46:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.745 12:46:11 -- common/autotest_common.sh@1142 -- # return 0 00:03:55.745 12:46:11 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:55.745 12:46:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.745 12:46:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.745 12:46:11 -- common/autotest_common.sh@10 -- # set +x 00:03:55.745 ************************************ 00:03:55.745 START TEST rpc 00:03:55.745 ************************************ 00:03:55.745 12:46:11 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:55.745 * Looking for test storage... 00:03:55.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:55.745 12:46:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58677 00:03:55.745 12:46:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:55.745 12:46:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.745 12:46:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58677 00:03:55.745 12:46:11 rpc -- common/autotest_common.sh@829 -- # '[' -z 58677 ']' 00:03:55.745 12:46:11 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.745 12:46:11 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:55.745 12:46:11 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
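The rpc suite that starts here launches a bare spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then blocks in waitforlisten until the target serves JSON-RPC on /var/tmp/spdk.sock. A rough hand-run equivalent, assuming the same build path and that scripts/rpc.py is used as the standalone RPC client (the in-tree rpc_cmd wrapper drives the same socket):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &            # same binary and tracepoint group as the log
  tgt_pid=$!
  # crude stand-in for waitforlisten: poll until the RPC socket exists
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  "$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs           # prints [] on a freshly started target
  kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null || true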
00:03:55.745 12:46:11 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:55.745 12:46:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.003 [2024-07-15 12:46:11.864811] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:03:56.003 [2024-07-15 12:46:11.865157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58677 ] 00:03:56.003 [2024-07-15 12:46:11.999186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.262 [2024-07-15 12:46:12.137628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:56.262 [2024-07-15 12:46:12.137687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58677' to capture a snapshot of events at runtime. 00:03:56.262 [2024-07-15 12:46:12.137700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:56.262 [2024-07-15 12:46:12.137709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:56.262 [2024-07-15 12:46:12.137716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58677 for offline analysis/debug. 00:03:56.262 [2024-07-15 12:46:12.137749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.262 [2024-07-15 12:46:12.190996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:03:56.837 12:46:12 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.837 12:46:12 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:56.837 12:46:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:56.837 12:46:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:56.837 12:46:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:56.837 12:46:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:56.837 12:46:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.837 12:46:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.837 12:46:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.837 ************************************ 00:03:56.837 START TEST rpc_integrity 00:03:56.837 ************************************ 00:03:56.837 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:56.837 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:56.837 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.837 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.837 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.837 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:56.837 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:56.837 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:56.837 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:03:56.837 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.837 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.837 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.837 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:57.119 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.120 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.120 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.120 { 00:03:57.120 "name": "Malloc0", 00:03:57.120 "aliases": [ 00:03:57.120 "aa517cc9-3496-4684-843d-a83d32e24970" 00:03:57.120 ], 00:03:57.120 "product_name": "Malloc disk", 00:03:57.120 "block_size": 512, 00:03:57.120 "num_blocks": 16384, 00:03:57.120 "uuid": "aa517cc9-3496-4684-843d-a83d32e24970", 00:03:57.120 "assigned_rate_limits": { 00:03:57.120 "rw_ios_per_sec": 0, 00:03:57.120 "rw_mbytes_per_sec": 0, 00:03:57.120 "r_mbytes_per_sec": 0, 00:03:57.120 "w_mbytes_per_sec": 0 00:03:57.120 }, 00:03:57.120 "claimed": false, 00:03:57.120 "zoned": false, 00:03:57.120 "supported_io_types": { 00:03:57.120 "read": true, 00:03:57.120 "write": true, 00:03:57.120 "unmap": true, 00:03:57.120 "flush": true, 00:03:57.120 "reset": true, 00:03:57.120 "nvme_admin": false, 00:03:57.120 "nvme_io": false, 00:03:57.120 "nvme_io_md": false, 00:03:57.120 "write_zeroes": true, 00:03:57.120 "zcopy": true, 00:03:57.120 "get_zone_info": false, 00:03:57.120 "zone_management": false, 00:03:57.120 "zone_append": false, 00:03:57.120 "compare": false, 00:03:57.120 "compare_and_write": false, 00:03:57.120 "abort": true, 00:03:57.120 "seek_hole": false, 00:03:57.120 "seek_data": false, 00:03:57.120 "copy": true, 00:03:57.120 "nvme_iov_md": false 00:03:57.120 }, 00:03:57.120 "memory_domains": [ 00:03:57.120 { 00:03:57.120 "dma_device_id": "system", 00:03:57.120 "dma_device_type": 1 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.120 "dma_device_type": 2 00:03:57.120 } 00:03:57.120 ], 00:03:57.120 "driver_specific": {} 00:03:57.120 } 00:03:57.120 ]' 00:03:57.120 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.120 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.120 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:57.120 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 [2024-07-15 12:46:12.975434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:57.120 [2024-07-15 12:46:12.975491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.120 [2024-07-15 12:46:12.975513] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17a9da0 00:03:57.120 [2024-07-15 12:46:12.975523] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.120 [2024-07-15 12:46:12.977249] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.120 [2024-07-15 12:46:12.977292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:03:57.120 Passthru0 00:03:57.120 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.120 12:46:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.120 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 12:46:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.120 { 00:03:57.120 "name": "Malloc0", 00:03:57.120 "aliases": [ 00:03:57.120 "aa517cc9-3496-4684-843d-a83d32e24970" 00:03:57.120 ], 00:03:57.120 "product_name": "Malloc disk", 00:03:57.120 "block_size": 512, 00:03:57.120 "num_blocks": 16384, 00:03:57.120 "uuid": "aa517cc9-3496-4684-843d-a83d32e24970", 00:03:57.120 "assigned_rate_limits": { 00:03:57.120 "rw_ios_per_sec": 0, 00:03:57.120 "rw_mbytes_per_sec": 0, 00:03:57.120 "r_mbytes_per_sec": 0, 00:03:57.120 "w_mbytes_per_sec": 0 00:03:57.120 }, 00:03:57.120 "claimed": true, 00:03:57.120 "claim_type": "exclusive_write", 00:03:57.120 "zoned": false, 00:03:57.120 "supported_io_types": { 00:03:57.120 "read": true, 00:03:57.120 "write": true, 00:03:57.120 "unmap": true, 00:03:57.120 "flush": true, 00:03:57.120 "reset": true, 00:03:57.120 "nvme_admin": false, 00:03:57.120 "nvme_io": false, 00:03:57.120 "nvme_io_md": false, 00:03:57.120 "write_zeroes": true, 00:03:57.120 "zcopy": true, 00:03:57.120 "get_zone_info": false, 00:03:57.120 "zone_management": false, 00:03:57.120 "zone_append": false, 00:03:57.120 "compare": false, 00:03:57.120 "compare_and_write": false, 00:03:57.120 "abort": true, 00:03:57.120 "seek_hole": false, 00:03:57.120 "seek_data": false, 00:03:57.120 "copy": true, 00:03:57.120 "nvme_iov_md": false 00:03:57.120 }, 00:03:57.120 "memory_domains": [ 00:03:57.120 { 00:03:57.120 "dma_device_id": "system", 00:03:57.120 "dma_device_type": 1 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.120 "dma_device_type": 2 00:03:57.120 } 00:03:57.120 ], 00:03:57.120 "driver_specific": {} 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "name": "Passthru0", 00:03:57.120 "aliases": [ 00:03:57.120 "2167df33-328c-54b0-a5fd-43aee6e6d4f4" 00:03:57.120 ], 00:03:57.120 "product_name": "passthru", 00:03:57.120 "block_size": 512, 00:03:57.120 "num_blocks": 16384, 00:03:57.120 "uuid": "2167df33-328c-54b0-a5fd-43aee6e6d4f4", 00:03:57.120 "assigned_rate_limits": { 00:03:57.120 "rw_ios_per_sec": 0, 00:03:57.120 "rw_mbytes_per_sec": 0, 00:03:57.120 "r_mbytes_per_sec": 0, 00:03:57.120 "w_mbytes_per_sec": 0 00:03:57.120 }, 00:03:57.120 "claimed": false, 00:03:57.120 "zoned": false, 00:03:57.120 "supported_io_types": { 00:03:57.120 "read": true, 00:03:57.120 "write": true, 00:03:57.120 "unmap": true, 00:03:57.120 "flush": true, 00:03:57.120 "reset": true, 00:03:57.120 "nvme_admin": false, 00:03:57.120 "nvme_io": false, 00:03:57.120 "nvme_io_md": false, 00:03:57.120 "write_zeroes": true, 00:03:57.120 "zcopy": true, 00:03:57.120 "get_zone_info": false, 00:03:57.120 "zone_management": false, 00:03:57.120 "zone_append": false, 00:03:57.120 "compare": false, 00:03:57.120 "compare_and_write": false, 00:03:57.120 "abort": true, 00:03:57.120 "seek_hole": false, 00:03:57.120 "seek_data": false, 00:03:57.120 "copy": true, 00:03:57.120 "nvme_iov_md": false 00:03:57.120 }, 00:03:57.120 "memory_domains": [ 00:03:57.120 { 00:03:57.120 "dma_device_id": "system", 00:03:57.120 
"dma_device_type": 1 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.120 "dma_device_type": 2 00:03:57.120 } 00:03:57.120 ], 00:03:57.120 "driver_specific": { 00:03:57.120 "passthru": { 00:03:57.120 "name": "Passthru0", 00:03:57.120 "base_bdev_name": "Malloc0" 00:03:57.120 } 00:03:57.120 } 00:03:57.120 } 00:03:57.120 ]' 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.120 ************************************ 00:03:57.120 END TEST rpc_integrity 00:03:57.120 ************************************ 00:03:57.120 12:46:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.120 00:03:57.120 real 0m0.318s 00:03:57.120 user 0m0.212s 00:03:57.120 sys 0m0.041s 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.120 12:46:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.394 12:46:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.394 12:46:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:57.394 12:46:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.394 12:46:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.394 12:46:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.394 ************************************ 00:03:57.394 START TEST rpc_plugins 00:03:57.394 ************************************ 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:57.394 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.394 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:57.394 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.394 
12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.394 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:57.394 { 00:03:57.394 "name": "Malloc1", 00:03:57.394 "aliases": [ 00:03:57.394 "0a04d62f-b7fe-48c7-91e7-549e9548a70e" 00:03:57.394 ], 00:03:57.394 "product_name": "Malloc disk", 00:03:57.394 "block_size": 4096, 00:03:57.394 "num_blocks": 256, 00:03:57.394 "uuid": "0a04d62f-b7fe-48c7-91e7-549e9548a70e", 00:03:57.394 "assigned_rate_limits": { 00:03:57.394 "rw_ios_per_sec": 0, 00:03:57.394 "rw_mbytes_per_sec": 0, 00:03:57.394 "r_mbytes_per_sec": 0, 00:03:57.394 "w_mbytes_per_sec": 0 00:03:57.394 }, 00:03:57.394 "claimed": false, 00:03:57.394 "zoned": false, 00:03:57.394 "supported_io_types": { 00:03:57.394 "read": true, 00:03:57.394 "write": true, 00:03:57.394 "unmap": true, 00:03:57.394 "flush": true, 00:03:57.394 "reset": true, 00:03:57.394 "nvme_admin": false, 00:03:57.394 "nvme_io": false, 00:03:57.394 "nvme_io_md": false, 00:03:57.394 "write_zeroes": true, 00:03:57.394 "zcopy": true, 00:03:57.394 "get_zone_info": false, 00:03:57.394 "zone_management": false, 00:03:57.394 "zone_append": false, 00:03:57.394 "compare": false, 00:03:57.394 "compare_and_write": false, 00:03:57.394 "abort": true, 00:03:57.394 "seek_hole": false, 00:03:57.394 "seek_data": false, 00:03:57.394 "copy": true, 00:03:57.394 "nvme_iov_md": false 00:03:57.394 }, 00:03:57.394 "memory_domains": [ 00:03:57.394 { 00:03:57.394 "dma_device_id": "system", 00:03:57.394 "dma_device_type": 1 00:03:57.394 }, 00:03:57.394 { 00:03:57.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.394 "dma_device_type": 2 00:03:57.394 } 00:03:57.394 ], 00:03:57.394 "driver_specific": {} 00:03:57.394 } 00:03:57.394 ]' 00:03:57.394 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:57.394 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:57.394 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.394 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.395 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:57.395 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.395 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.395 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.395 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:57.395 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:57.395 ************************************ 00:03:57.395 END TEST rpc_plugins 00:03:57.395 ************************************ 00:03:57.395 12:46:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:57.395 00:03:57.395 real 0m0.149s 00:03:57.395 user 0m0.092s 00:03:57.395 sys 0m0.020s 00:03:57.395 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.395 12:46:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.395 12:46:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.395 12:46:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:57.395 12:46:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.395 12:46:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
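Stripped of the xtrace noise, the rpc_integrity run above is a create/inspect/delete round trip over JSON-RPC: a malloc bdev is created, a passthru bdev is stacked on top of it, bdev_get_bdevs is checked with jq, and both are torn down again. The same sequence against a running target, assuming scripts/rpc.py as the client:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # client path assumed from the tree layout
  "$RPC" bdev_get_bdevs | jq length                    # expect 0 on a clean target
  malloc=$("$RPC" bdev_malloc_create 8 512)            # 8 MB / 512-byte blocks, prints Malloc0
  "$RPC" bdev_passthru_create -b "$malloc" -p Passthru0
  "$RPC" bdev_get_bdevs | jq length                    # expect 2: the malloc bdev plus Passthru0
  "$RPC" bdev_passthru_delete Passthru0
  "$RPC" bdev_malloc_delete "$malloc"
  "$RPC" bdev_get_bdevs | jq length                    # back to 0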
00:03:57.395 12:46:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.395 ************************************ 00:03:57.395 START TEST rpc_trace_cmd_test 00:03:57.395 ************************************ 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:57.395 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58677", 00:03:57.395 "tpoint_group_mask": "0x8", 00:03:57.395 "iscsi_conn": { 00:03:57.395 "mask": "0x2", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "scsi": { 00:03:57.395 "mask": "0x4", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "bdev": { 00:03:57.395 "mask": "0x8", 00:03:57.395 "tpoint_mask": "0xffffffffffffffff" 00:03:57.395 }, 00:03:57.395 "nvmf_rdma": { 00:03:57.395 "mask": "0x10", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "nvmf_tcp": { 00:03:57.395 "mask": "0x20", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "ftl": { 00:03:57.395 "mask": "0x40", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "blobfs": { 00:03:57.395 "mask": "0x80", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "dsa": { 00:03:57.395 "mask": "0x200", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "thread": { 00:03:57.395 "mask": "0x400", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "nvme_pcie": { 00:03:57.395 "mask": "0x800", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "iaa": { 00:03:57.395 "mask": "0x1000", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "nvme_tcp": { 00:03:57.395 "mask": "0x2000", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "bdev_nvme": { 00:03:57.395 "mask": "0x4000", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 }, 00:03:57.395 "sock": { 00:03:57.395 "mask": "0x8000", 00:03:57.395 "tpoint_mask": "0x0" 00:03:57.395 } 00:03:57.395 }' 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:57.395 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:57.653 ************************************ 00:03:57.653 END TEST rpc_trace_cmd_test 00:03:57.653 ************************************ 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:57.653 00:03:57.653 real 0m0.267s 00:03:57.653 user 0m0.230s 
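rpc_trace_cmd_test, just above, verifies that starting the target with -e bdev leaves a trace shared-memory file behind and fully enables the bdev tracepoint group: trace_get_info reports tpoint_group_mask 0x8 and a bdev tpoint_mask of 0xffffffffffffffff. The same checks by hand, again assuming scripts/rpc.py:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" trace_get_info | jq -r .tpoint_shm_path       # e.g. /dev/shm/spdk_tgt_trace.pid<pid>
  "$RPC" trace_get_info | jq -r .tpoint_group_mask     # 0x8: the bdev group enabled by -e bdev
  "$RPC" trace_get_info | jq -r .bdev.tpoint_mask      # 0xffffffffffffffff: all bdev tracepoints on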
00:03:57.653 sys 0m0.026s 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.653 12:46:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.653 12:46:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.653 12:46:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:57.653 12:46:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:57.653 12:46:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:57.653 12:46:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.653 12:46:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.653 12:46:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.653 ************************************ 00:03:57.653 START TEST rpc_daemon_integrity 00:03:57.653 ************************************ 00:03:57.653 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:57.653 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.653 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.653 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.653 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.653 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.653 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.911 { 00:03:57.911 "name": "Malloc2", 00:03:57.911 "aliases": [ 00:03:57.911 "2b381189-1598-4411-995a-cc4146a1ebbb" 00:03:57.911 ], 00:03:57.911 "product_name": "Malloc disk", 00:03:57.911 "block_size": 512, 00:03:57.911 "num_blocks": 16384, 00:03:57.911 "uuid": "2b381189-1598-4411-995a-cc4146a1ebbb", 00:03:57.911 "assigned_rate_limits": { 00:03:57.911 "rw_ios_per_sec": 0, 00:03:57.911 "rw_mbytes_per_sec": 0, 00:03:57.911 "r_mbytes_per_sec": 0, 00:03:57.911 "w_mbytes_per_sec": 0 00:03:57.911 }, 00:03:57.911 "claimed": false, 00:03:57.911 "zoned": false, 00:03:57.911 "supported_io_types": { 00:03:57.911 "read": true, 00:03:57.911 "write": true, 00:03:57.911 "unmap": true, 00:03:57.911 "flush": true, 00:03:57.911 "reset": true, 00:03:57.911 "nvme_admin": false, 00:03:57.911 "nvme_io": false, 00:03:57.911 "nvme_io_md": false, 00:03:57.911 "write_zeroes": true, 00:03:57.911 "zcopy": true, 00:03:57.911 "get_zone_info": false, 00:03:57.911 "zone_management": false, 00:03:57.911 "zone_append": false, 
00:03:57.911 "compare": false, 00:03:57.911 "compare_and_write": false, 00:03:57.911 "abort": true, 00:03:57.911 "seek_hole": false, 00:03:57.911 "seek_data": false, 00:03:57.911 "copy": true, 00:03:57.911 "nvme_iov_md": false 00:03:57.911 }, 00:03:57.911 "memory_domains": [ 00:03:57.911 { 00:03:57.911 "dma_device_id": "system", 00:03:57.911 "dma_device_type": 1 00:03:57.911 }, 00:03:57.911 { 00:03:57.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.911 "dma_device_type": 2 00:03:57.911 } 00:03:57.911 ], 00:03:57.911 "driver_specific": {} 00:03:57.911 } 00:03:57.911 ]' 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.911 [2024-07-15 12:46:13.848069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:57.911 [2024-07-15 12:46:13.848129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.911 [2024-07-15 12:46:13.848152] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x180ebe0 00:03:57.911 [2024-07-15 12:46:13.848162] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.911 [2024-07-15 12:46:13.849832] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.911 [2024-07-15 12:46:13.849870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.911 Passthru0 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.911 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.911 { 00:03:57.911 "name": "Malloc2", 00:03:57.911 "aliases": [ 00:03:57.911 "2b381189-1598-4411-995a-cc4146a1ebbb" 00:03:57.911 ], 00:03:57.911 "product_name": "Malloc disk", 00:03:57.911 "block_size": 512, 00:03:57.911 "num_blocks": 16384, 00:03:57.911 "uuid": "2b381189-1598-4411-995a-cc4146a1ebbb", 00:03:57.911 "assigned_rate_limits": { 00:03:57.911 "rw_ios_per_sec": 0, 00:03:57.911 "rw_mbytes_per_sec": 0, 00:03:57.911 "r_mbytes_per_sec": 0, 00:03:57.911 "w_mbytes_per_sec": 0 00:03:57.911 }, 00:03:57.911 "claimed": true, 00:03:57.911 "claim_type": "exclusive_write", 00:03:57.911 "zoned": false, 00:03:57.911 "supported_io_types": { 00:03:57.911 "read": true, 00:03:57.911 "write": true, 00:03:57.911 "unmap": true, 00:03:57.911 "flush": true, 00:03:57.911 "reset": true, 00:03:57.911 "nvme_admin": false, 00:03:57.911 "nvme_io": false, 00:03:57.911 "nvme_io_md": false, 00:03:57.911 "write_zeroes": true, 00:03:57.911 "zcopy": true, 00:03:57.911 "get_zone_info": false, 00:03:57.911 "zone_management": false, 00:03:57.911 "zone_append": false, 00:03:57.911 "compare": false, 00:03:57.911 "compare_and_write": false, 00:03:57.911 "abort": true, 00:03:57.911 "seek_hole": 
false, 00:03:57.911 "seek_data": false, 00:03:57.911 "copy": true, 00:03:57.911 "nvme_iov_md": false 00:03:57.911 }, 00:03:57.911 "memory_domains": [ 00:03:57.911 { 00:03:57.911 "dma_device_id": "system", 00:03:57.911 "dma_device_type": 1 00:03:57.911 }, 00:03:57.911 { 00:03:57.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.911 "dma_device_type": 2 00:03:57.911 } 00:03:57.911 ], 00:03:57.911 "driver_specific": {} 00:03:57.911 }, 00:03:57.911 { 00:03:57.911 "name": "Passthru0", 00:03:57.911 "aliases": [ 00:03:57.911 "3e5524ab-d160-5231-8ef0-3ae7c99db292" 00:03:57.911 ], 00:03:57.911 "product_name": "passthru", 00:03:57.911 "block_size": 512, 00:03:57.911 "num_blocks": 16384, 00:03:57.911 "uuid": "3e5524ab-d160-5231-8ef0-3ae7c99db292", 00:03:57.911 "assigned_rate_limits": { 00:03:57.911 "rw_ios_per_sec": 0, 00:03:57.911 "rw_mbytes_per_sec": 0, 00:03:57.911 "r_mbytes_per_sec": 0, 00:03:57.911 "w_mbytes_per_sec": 0 00:03:57.911 }, 00:03:57.911 "claimed": false, 00:03:57.911 "zoned": false, 00:03:57.911 "supported_io_types": { 00:03:57.911 "read": true, 00:03:57.911 "write": true, 00:03:57.911 "unmap": true, 00:03:57.912 "flush": true, 00:03:57.912 "reset": true, 00:03:57.912 "nvme_admin": false, 00:03:57.912 "nvme_io": false, 00:03:57.912 "nvme_io_md": false, 00:03:57.912 "write_zeroes": true, 00:03:57.912 "zcopy": true, 00:03:57.912 "get_zone_info": false, 00:03:57.912 "zone_management": false, 00:03:57.912 "zone_append": false, 00:03:57.912 "compare": false, 00:03:57.912 "compare_and_write": false, 00:03:57.912 "abort": true, 00:03:57.912 "seek_hole": false, 00:03:57.912 "seek_data": false, 00:03:57.912 "copy": true, 00:03:57.912 "nvme_iov_md": false 00:03:57.912 }, 00:03:57.912 "memory_domains": [ 00:03:57.912 { 00:03:57.912 "dma_device_id": "system", 00:03:57.912 "dma_device_type": 1 00:03:57.912 }, 00:03:57.912 { 00:03:57.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.912 "dma_device_type": 2 00:03:57.912 } 00:03:57.912 ], 00:03:57.912 "driver_specific": { 00:03:57.912 "passthru": { 00:03:57.912 "name": "Passthru0", 00:03:57.912 "base_bdev_name": "Malloc2" 00:03:57.912 } 00:03:57.912 } 00:03:57.912 } 00:03:57.912 ]' 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.912 12:46:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.169 ************************************ 00:03:58.169 END TEST rpc_daemon_integrity 00:03:58.169 ************************************ 00:03:58.169 12:46:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.169 00:03:58.169 real 0m0.324s 00:03:58.169 user 0m0.218s 00:03:58.169 sys 0m0.041s 00:03:58.169 12:46:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.169 12:46:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.169 12:46:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:58.169 12:46:14 rpc -- rpc/rpc.sh@84 -- # killprocess 58677 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@948 -- # '[' -z 58677 ']' 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@952 -- # kill -0 58677 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@953 -- # uname 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58677 00:03:58.169 killing process with pid 58677 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58677' 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@967 -- # kill 58677 00:03:58.169 12:46:14 rpc -- common/autotest_common.sh@972 -- # wait 58677 00:03:58.426 00:03:58.426 real 0m2.744s 00:03:58.426 user 0m3.571s 00:03:58.426 sys 0m0.629s 00:03:58.426 ************************************ 00:03:58.426 END TEST rpc 00:03:58.426 ************************************ 00:03:58.426 12:46:14 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.426 12:46:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.683 12:46:14 -- common/autotest_common.sh@1142 -- # return 0 00:03:58.683 12:46:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:58.683 12:46:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.683 12:46:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.683 12:46:14 -- common/autotest_common.sh@10 -- # set +x 00:03:58.683 ************************************ 00:03:58.683 START TEST skip_rpc 00:03:58.683 ************************************ 00:03:58.683 12:46:14 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:58.683 * Looking for test storage... 
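Each suite in this log ends with the same teardown: killprocess first checks the PID is still alive with kill -0, then sends the default SIGTERM and waits so the target can shut down before the next test reuses the RPC socket. A stripped-down stand-in (stop_tgt is a made-up name; the harness's own helper is killprocess, which also special-cases sudo-owned processes):

  stop_tgt() {
      local pid=$1
      kill -0 "$pid" || return 1         # fail if the target already exited
      kill "$pid"                        # default SIGTERM, same as the log
      wait "$pid" 2>/dev/null || true    # reap it; ignore the signal-induced exit status
  }
  stop_tgt "$tgt_pid"                    # pid of a target started in the background with &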
00:03:58.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.683 12:46:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:58.683 12:46:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:58.683 12:46:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:58.683 12:46:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.683 12:46:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.683 12:46:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.683 ************************************ 00:03:58.683 START TEST skip_rpc 00:03:58.683 ************************************ 00:03:58.683 12:46:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:58.683 12:46:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58876 00:03:58.683 12:46:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.683 12:46:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:58.683 12:46:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:58.683 [2024-07-15 12:46:14.661528] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:03:58.684 [2024-07-15 12:46:14.661615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58876 ] 00:03:58.941 [2024-07-15 12:46:14.794969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.941 [2024-07-15 12:46:14.909453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.941 [2024-07-15 12:46:14.963602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58876 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58876 ']' 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58876 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58876 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.311 killing process with pid 58876 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58876' 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58876 00:04:04.311 12:46:19 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58876 00:04:04.311 00:04:04.311 real 0m5.416s 00:04:04.311 user 0m5.039s 00:04:04.311 sys 0m0.271s 00:04:04.311 ************************************ 00:04:04.311 END TEST skip_rpc 00:04:04.311 ************************************ 00:04:04.311 12:46:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.311 12:46:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.311 12:46:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:04.311 12:46:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:04.311 12:46:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.311 12:46:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.311 12:46:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.311 ************************************ 00:04:04.311 START TEST skip_rpc_with_json 00:04:04.311 ************************************ 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58957 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58957 00:04:04.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 58957 ']' 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
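The basic skip_rpc case that just finished starts the target with --no-rpc-server and then asserts that a JSON-RPC call fails: with no RPC listener there is nothing to serve spdk_get_version, which is exactly the negative result the NOT wrapper checks for. A hand-run equivalent, assuming scripts/rpc.py:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                               # the test sleeps instead of waiting on a socket
  if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
      echo "unexpected: RPC server answered" >&2
  else
      echo "RPC disabled as expected"                   # matches the NOT rpc_cmd ... branch above
  fi
  kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null || true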
00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.311 12:46:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.311 [2024-07-15 12:46:20.147476] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:04.311 [2024-07-15 12:46:20.147601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58957 ] 00:04:04.311 [2024-07-15 12:46:20.293275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.570 [2024-07-15 12:46:20.404328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.570 [2024-07-15 12:46:20.459328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.137 [2024-07-15 12:46:21.134032] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:05.137 request: 00:04:05.137 { 00:04:05.137 "trtype": "tcp", 00:04:05.137 "method": "nvmf_get_transports", 00:04:05.137 "req_id": 1 00:04:05.137 } 00:04:05.137 Got JSON-RPC error response 00:04:05.137 response: 00:04:05.137 { 00:04:05.137 "code": -19, 00:04:05.137 "message": "No such device" 00:04:05.137 } 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.137 [2024-07-15 12:46:21.146143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.137 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.394 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.394 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:05.394 { 00:04:05.394 "subsystems": [ 00:04:05.394 { 00:04:05.394 "subsystem": "keyring", 00:04:05.394 "config": [] 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "subsystem": "iobuf", 00:04:05.394 "config": [ 00:04:05.394 { 00:04:05.394 "method": "iobuf_set_options", 00:04:05.394 "params": { 00:04:05.394 "small_pool_count": 8192, 00:04:05.394 "large_pool_count": 1024, 00:04:05.394 "small_bufsize": 8192, 00:04:05.394 "large_bufsize": 135168 00:04:05.394 } 00:04:05.394 } 00:04:05.394 
] 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "subsystem": "sock", 00:04:05.394 "config": [ 00:04:05.394 { 00:04:05.394 "method": "sock_set_default_impl", 00:04:05.394 "params": { 00:04:05.394 "impl_name": "uring" 00:04:05.394 } 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "method": "sock_impl_set_options", 00:04:05.394 "params": { 00:04:05.394 "impl_name": "ssl", 00:04:05.394 "recv_buf_size": 4096, 00:04:05.394 "send_buf_size": 4096, 00:04:05.394 "enable_recv_pipe": true, 00:04:05.394 "enable_quickack": false, 00:04:05.394 "enable_placement_id": 0, 00:04:05.394 "enable_zerocopy_send_server": true, 00:04:05.394 "enable_zerocopy_send_client": false, 00:04:05.394 "zerocopy_threshold": 0, 00:04:05.394 "tls_version": 0, 00:04:05.394 "enable_ktls": false 00:04:05.394 } 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "method": "sock_impl_set_options", 00:04:05.394 "params": { 00:04:05.394 "impl_name": "posix", 00:04:05.394 "recv_buf_size": 2097152, 00:04:05.394 "send_buf_size": 2097152, 00:04:05.394 "enable_recv_pipe": true, 00:04:05.394 "enable_quickack": false, 00:04:05.394 "enable_placement_id": 0, 00:04:05.394 "enable_zerocopy_send_server": true, 00:04:05.394 "enable_zerocopy_send_client": false, 00:04:05.394 "zerocopy_threshold": 0, 00:04:05.394 "tls_version": 0, 00:04:05.394 "enable_ktls": false 00:04:05.394 } 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "method": "sock_impl_set_options", 00:04:05.394 "params": { 00:04:05.394 "impl_name": "uring", 00:04:05.394 "recv_buf_size": 2097152, 00:04:05.394 "send_buf_size": 2097152, 00:04:05.394 "enable_recv_pipe": true, 00:04:05.394 "enable_quickack": false, 00:04:05.394 "enable_placement_id": 0, 00:04:05.394 "enable_zerocopy_send_server": false, 00:04:05.394 "enable_zerocopy_send_client": false, 00:04:05.394 "zerocopy_threshold": 0, 00:04:05.394 "tls_version": 0, 00:04:05.394 "enable_ktls": false 00:04:05.394 } 00:04:05.394 } 00:04:05.394 ] 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "subsystem": "vmd", 00:04:05.394 "config": [] 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "subsystem": "accel", 00:04:05.394 "config": [ 00:04:05.394 { 00:04:05.394 "method": "accel_set_options", 00:04:05.394 "params": { 00:04:05.394 "small_cache_size": 128, 00:04:05.394 "large_cache_size": 16, 00:04:05.394 "task_count": 2048, 00:04:05.394 "sequence_count": 2048, 00:04:05.394 "buf_count": 2048 00:04:05.394 } 00:04:05.394 } 00:04:05.394 ] 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "subsystem": "bdev", 00:04:05.394 "config": [ 00:04:05.394 { 00:04:05.394 "method": "bdev_set_options", 00:04:05.394 "params": { 00:04:05.394 "bdev_io_pool_size": 65535, 00:04:05.394 "bdev_io_cache_size": 256, 00:04:05.394 "bdev_auto_examine": true, 00:04:05.394 "iobuf_small_cache_size": 128, 00:04:05.394 "iobuf_large_cache_size": 16 00:04:05.394 } 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "method": "bdev_raid_set_options", 00:04:05.394 "params": { 00:04:05.394 "process_window_size_kb": 1024 00:04:05.394 } 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "method": "bdev_iscsi_set_options", 00:04:05.394 "params": { 00:04:05.394 "timeout_sec": 30 00:04:05.394 } 00:04:05.394 }, 00:04:05.394 { 00:04:05.394 "method": "bdev_nvme_set_options", 00:04:05.394 "params": { 00:04:05.394 "action_on_timeout": "none", 00:04:05.394 "timeout_us": 0, 00:04:05.394 "timeout_admin_us": 0, 00:04:05.394 "keep_alive_timeout_ms": 10000, 00:04:05.395 "arbitration_burst": 0, 00:04:05.395 "low_priority_weight": 0, 00:04:05.395 "medium_priority_weight": 0, 00:04:05.395 "high_priority_weight": 0, 00:04:05.395 
"nvme_adminq_poll_period_us": 10000, 00:04:05.395 "nvme_ioq_poll_period_us": 0, 00:04:05.395 "io_queue_requests": 0, 00:04:05.395 "delay_cmd_submit": true, 00:04:05.395 "transport_retry_count": 4, 00:04:05.395 "bdev_retry_count": 3, 00:04:05.395 "transport_ack_timeout": 0, 00:04:05.395 "ctrlr_loss_timeout_sec": 0, 00:04:05.395 "reconnect_delay_sec": 0, 00:04:05.395 "fast_io_fail_timeout_sec": 0, 00:04:05.395 "disable_auto_failback": false, 00:04:05.395 "generate_uuids": false, 00:04:05.395 "transport_tos": 0, 00:04:05.395 "nvme_error_stat": false, 00:04:05.395 "rdma_srq_size": 0, 00:04:05.395 "io_path_stat": false, 00:04:05.395 "allow_accel_sequence": false, 00:04:05.395 "rdma_max_cq_size": 0, 00:04:05.395 "rdma_cm_event_timeout_ms": 0, 00:04:05.395 "dhchap_digests": [ 00:04:05.395 "sha256", 00:04:05.395 "sha384", 00:04:05.395 "sha512" 00:04:05.395 ], 00:04:05.395 "dhchap_dhgroups": [ 00:04:05.395 "null", 00:04:05.395 "ffdhe2048", 00:04:05.395 "ffdhe3072", 00:04:05.395 "ffdhe4096", 00:04:05.395 "ffdhe6144", 00:04:05.395 "ffdhe8192" 00:04:05.395 ] 00:04:05.395 } 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "method": "bdev_nvme_set_hotplug", 00:04:05.395 "params": { 00:04:05.395 "period_us": 100000, 00:04:05.395 "enable": false 00:04:05.395 } 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "method": "bdev_wait_for_examine" 00:04:05.395 } 00:04:05.395 ] 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "subsystem": "scsi", 00:04:05.395 "config": null 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "subsystem": "scheduler", 00:04:05.395 "config": [ 00:04:05.395 { 00:04:05.395 "method": "framework_set_scheduler", 00:04:05.395 "params": { 00:04:05.395 "name": "static" 00:04:05.395 } 00:04:05.395 } 00:04:05.395 ] 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "subsystem": "vhost_scsi", 00:04:05.395 "config": [] 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "subsystem": "vhost_blk", 00:04:05.395 "config": [] 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "subsystem": "ublk", 00:04:05.395 "config": [] 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "subsystem": "nbd", 00:04:05.395 "config": [] 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "subsystem": "nvmf", 00:04:05.395 "config": [ 00:04:05.395 { 00:04:05.395 "method": "nvmf_set_config", 00:04:05.395 "params": { 00:04:05.395 "discovery_filter": "match_any", 00:04:05.395 "admin_cmd_passthru": { 00:04:05.395 "identify_ctrlr": false 00:04:05.395 } 00:04:05.395 } 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "method": "nvmf_set_max_subsystems", 00:04:05.395 "params": { 00:04:05.395 "max_subsystems": 1024 00:04:05.395 } 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "method": "nvmf_set_crdt", 00:04:05.395 "params": { 00:04:05.395 "crdt1": 0, 00:04:05.395 "crdt2": 0, 00:04:05.395 "crdt3": 0 00:04:05.395 } 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "method": "nvmf_create_transport", 00:04:05.395 "params": { 00:04:05.395 "trtype": "TCP", 00:04:05.395 "max_queue_depth": 128, 00:04:05.395 "max_io_qpairs_per_ctrlr": 127, 00:04:05.395 "in_capsule_data_size": 4096, 00:04:05.395 "max_io_size": 131072, 00:04:05.395 "io_unit_size": 131072, 00:04:05.395 "max_aq_depth": 128, 00:04:05.395 "num_shared_buffers": 511, 00:04:05.395 "buf_cache_size": 4294967295, 00:04:05.395 "dif_insert_or_strip": false, 00:04:05.395 "zcopy": false, 00:04:05.395 "c2h_success": true, 00:04:05.395 "sock_priority": 0, 00:04:05.395 "abort_timeout_sec": 1, 00:04:05.395 "ack_timeout": 0, 00:04:05.395 "data_wr_pool_size": 0 00:04:05.395 } 00:04:05.395 } 00:04:05.395 ] 00:04:05.395 }, 00:04:05.395 { 00:04:05.395 "subsystem": 
"iscsi", 00:04:05.395 "config": [ 00:04:05.395 { 00:04:05.395 "method": "iscsi_set_options", 00:04:05.395 "params": { 00:04:05.395 "node_base": "iqn.2016-06.io.spdk", 00:04:05.395 "max_sessions": 128, 00:04:05.395 "max_connections_per_session": 2, 00:04:05.395 "max_queue_depth": 64, 00:04:05.395 "default_time2wait": 2, 00:04:05.395 "default_time2retain": 20, 00:04:05.395 "first_burst_length": 8192, 00:04:05.395 "immediate_data": true, 00:04:05.395 "allow_duplicated_isid": false, 00:04:05.395 "error_recovery_level": 0, 00:04:05.395 "nop_timeout": 60, 00:04:05.395 "nop_in_interval": 30, 00:04:05.395 "disable_chap": false, 00:04:05.395 "require_chap": false, 00:04:05.395 "mutual_chap": false, 00:04:05.395 "chap_group": 0, 00:04:05.395 "max_large_datain_per_connection": 64, 00:04:05.395 "max_r2t_per_connection": 4, 00:04:05.395 "pdu_pool_size": 36864, 00:04:05.395 "immediate_data_pool_size": 16384, 00:04:05.395 "data_out_pool_size": 2048 00:04:05.395 } 00:04:05.395 } 00:04:05.395 ] 00:04:05.395 } 00:04:05.395 ] 00:04:05.395 } 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58957 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58957 ']' 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58957 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58957 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58957' 00:04:05.395 killing process with pid 58957 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58957 00:04:05.395 12:46:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58957 00:04:05.960 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58990 00:04:05.960 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:05.960 12:46:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58990 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58990 ']' 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58990 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58990 00:04:11.232 killing process with pid 58990 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:11.232 12:46:26 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58990' 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58990 00:04:11.232 12:46:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58990 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.232 00:04:11.232 real 0m7.077s 00:04:11.232 user 0m6.832s 00:04:11.232 sys 0m0.633s 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.232 ************************************ 00:04:11.232 END TEST skip_rpc_with_json 00:04:11.232 ************************************ 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.232 12:46:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.232 12:46:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:11.232 12:46:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.232 12:46:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.232 12:46:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.232 ************************************ 00:04:11.232 START TEST skip_rpc_with_delay 00:04:11.232 ************************************ 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.232 [2024-07-15 
12:46:27.253025] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:11.232 [2024-07-15 12:46:27.253157] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:11.232 ************************************ 00:04:11.232 END TEST skip_rpc_with_delay 00:04:11.232 ************************************ 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:11.232 00:04:11.232 real 0m0.076s 00:04:11.232 user 0m0.049s 00:04:11.232 sys 0m0.027s 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.232 12:46:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:11.491 12:46:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.491 12:46:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:11.491 12:46:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:11.491 12:46:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:11.491 12:46:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.491 12:46:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.491 12:46:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.491 ************************************ 00:04:11.491 START TEST exit_on_failed_rpc_init 00:04:11.491 ************************************ 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59094 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59094 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59094 ']' 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.491 12:46:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.491 [2024-07-15 12:46:27.399707] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:11.491 [2024-07-15 12:46:27.399840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59094 ] 00:04:11.491 [2024-07-15 12:46:27.537639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.750 [2024-07-15 12:46:27.670834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.750 [2024-07-15 12:46:27.726516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:12.317 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:12.576 [2024-07-15 12:46:28.402615] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:12.576 [2024-07-15 12:46:28.402716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:04:12.576 [2024-07-15 12:46:28.533220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.842 [2024-07-15 12:46:28.678165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.842 [2024-07-15 12:46:28.678464] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:12.842 [2024-07-15 12:46:28.678625] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:12.842 [2024-07-15 12:46:28.678726] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59094 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59094 ']' 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59094 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59094 00:04:12.842 killing process with pid 59094 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59094' 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59094 00:04:12.842 12:46:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59094 00:04:13.140 00:04:13.140 real 0m1.861s 00:04:13.140 user 0m2.188s 00:04:13.140 sys 0m0.416s 00:04:13.141 12:46:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.141 12:46:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.141 ************************************ 00:04:13.141 END TEST exit_on_failed_rpc_init 00:04:13.141 ************************************ 00:04:13.399 12:46:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.399 12:46:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:13.399 ************************************ 00:04:13.399 END TEST skip_rpc 00:04:13.399 ************************************ 00:04:13.399 00:04:13.399 real 0m14.710s 00:04:13.399 user 0m14.202s 00:04:13.399 sys 0m1.523s 00:04:13.399 12:46:29 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.399 12:46:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.399 12:46:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.399 12:46:29 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:13.399 12:46:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.399 
12:46:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.399 12:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.399 ************************************ 00:04:13.399 START TEST rpc_client 00:04:13.399 ************************************ 00:04:13.399 12:46:29 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:13.399 * Looking for test storage... 00:04:13.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:13.399 12:46:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:13.399 OK 00:04:13.399 12:46:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:13.399 ************************************ 00:04:13.399 END TEST rpc_client 00:04:13.399 ************************************ 00:04:13.399 00:04:13.399 real 0m0.111s 00:04:13.399 user 0m0.046s 00:04:13.399 sys 0m0.069s 00:04:13.399 12:46:29 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.399 12:46:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:13.399 12:46:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.399 12:46:29 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:13.399 12:46:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.399 12:46:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.399 12:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.399 ************************************ 00:04:13.399 START TEST json_config 00:04:13.399 ************************************ 00:04:13.399 12:46:29 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:13.658 12:46:29 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:13.658 12:46:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.658 12:46:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.658 12:46:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.658 12:46:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.658 12:46:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.658 12:46:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.658 12:46:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:13.658 12:46:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@47 -- # : 0 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:13.658 12:46:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:13.658 INFO: JSON configuration test init 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.658 12:46:29 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:13.658 12:46:29 json_config -- json_config/common.sh@9 -- # local app=target 00:04:13.658 12:46:29 json_config -- json_config/common.sh@10 -- # shift 00:04:13.658 12:46:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.658 12:46:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.658 Waiting for target to run... 00:04:13.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:13.658 12:46:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.658 12:46:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.658 12:46:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.658 12:46:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59230 00:04:13.658 12:46:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
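Note: the json_config pass that starts here launches the target in a paused state (--wait-for-rpc) on its own RPC socket and only then pushes a configuration to it over RPC. A rough manual equivalent, run from the repo root with the binary, flags and helper scripts visible in the log (that the generator output is piped straight into load_config is an assumption based on the adjacent calls):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # generate a bdev/nvmf subsystem configuration and feed it to the paused target
  ./scripts/gen_nvme.sh --json-with-subsystems \
      | ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config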
00:04:13.658 12:46:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:13.658 12:46:29 json_config -- json_config/common.sh@25 -- # waitforlisten 59230 /var/tmp/spdk_tgt.sock 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 59230 ']' 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.658 12:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.658 [2024-07-15 12:46:29.613707] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:13.658 [2024-07-15 12:46:29.614030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59230 ] 00:04:14.223 [2024-07-15 12:46:30.039416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.223 [2024-07-15 12:46:30.141376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.789 12:46:30 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.789 12:46:30 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:14.789 12:46:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:14.789 00:04:14.789 12:46:30 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:14.789 12:46:30 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:14.789 12:46:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.789 12:46:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.789 12:46:30 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:14.789 12:46:30 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:14.789 12:46:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.789 12:46:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.789 12:46:30 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:14.789 12:46:30 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:14.789 12:46:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.048 [2024-07-15 12:46:30.914065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:15.306 12:46:31 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:15.306 12:46:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:15.306 12:46:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.306 12:46:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.306 12:46:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:15.306 12:46:31 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.306 12:46:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:15.306 12:46:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:15.306 12:46:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:15.306 12:46:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:15.565 12:46:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:15.565 12:46:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:15.565 12:46:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.565 12:46:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:15.565 12:46:31 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.565 12:46:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.823 MallocForNvmf0 00:04:15.823 12:46:31 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.823 12:46:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:16.080 MallocForNvmf1 00:04:16.080 12:46:32 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.080 12:46:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.337 [2024-07-15 12:46:32.320152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.338 12:46:32 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.338 12:46:32 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.594 12:46:32 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.594 12:46:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.852 12:46:32 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.852 12:46:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:17.416 12:46:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.416 12:46:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.416 [2024-07-15 12:46:33.408720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.416 12:46:33 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:17.416 12:46:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.416 12:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.416 12:46:33 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:17.416 12:46:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.416 12:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.723 12:46:33 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:17.723 12:46:33 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.723 12:46:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.723 MallocBdevForConfigChangeCheck 00:04:17.980 12:46:33 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:17.980 12:46:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.980 12:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.980 12:46:33 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:17.980 12:46:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.239 INFO: shutting down applications... 00:04:18.239 12:46:34 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
00:04:18.239 12:46:34 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:18.239 12:46:34 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:18.239 12:46:34 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:18.239 12:46:34 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:18.495 Calling clear_iscsi_subsystem 00:04:18.495 Calling clear_nvmf_subsystem 00:04:18.495 Calling clear_nbd_subsystem 00:04:18.495 Calling clear_ublk_subsystem 00:04:18.495 Calling clear_vhost_blk_subsystem 00:04:18.495 Calling clear_vhost_scsi_subsystem 00:04:18.495 Calling clear_bdev_subsystem 00:04:18.495 12:46:34 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:18.495 12:46:34 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:18.495 12:46:34 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:18.495 12:46:34 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:18.495 12:46:34 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.495 12:46:34 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:19.059 12:46:34 json_config -- json_config/json_config.sh@345 -- # break 00:04:19.059 12:46:34 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:19.059 12:46:34 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:19.059 12:46:34 json_config -- json_config/common.sh@31 -- # local app=target 00:04:19.059 12:46:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.059 12:46:34 json_config -- json_config/common.sh@35 -- # [[ -n 59230 ]] 00:04:19.059 12:46:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59230 00:04:19.059 12:46:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.059 12:46:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.059 12:46:34 json_config -- json_config/common.sh@41 -- # kill -0 59230 00:04:19.059 12:46:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:19.623 12:46:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:19.623 12:46:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.623 12:46:35 json_config -- json_config/common.sh@41 -- # kill -0 59230 00:04:19.623 12:46:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:19.623 12:46:35 json_config -- json_config/common.sh@43 -- # break 00:04:19.623 12:46:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:19.623 12:46:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:19.623 SPDK target shutdown done 00:04:19.623 INFO: relaunching applications... 00:04:19.623 12:46:35 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:04:19.623 12:46:35 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:19.623 12:46:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:19.623 12:46:35 json_config -- json_config/common.sh@10 -- # shift 00:04:19.623 12:46:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.623 12:46:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.623 12:46:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.623 12:46:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.623 12:46:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.623 Waiting for target to run... 00:04:19.623 12:46:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59426 00:04:19.623 12:46:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:19.623 12:46:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.623 12:46:35 json_config -- json_config/common.sh@25 -- # waitforlisten 59426 /var/tmp/spdk_tgt.sock 00:04:19.623 12:46:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 59426 ']' 00:04:19.623 12:46:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.623 12:46:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.623 12:46:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.623 12:46:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.623 12:46:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.623 [2024-07-15 12:46:35.470329] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:19.623 [2024-07-15 12:46:35.470459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59426 ] 00:04:19.881 [2024-07-15 12:46:35.899109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.139 [2024-07-15 12:46:36.001037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.139 [2024-07-15 12:46:36.128790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:20.397 [2024-07-15 12:46:36.345471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.397 [2024-07-15 12:46:36.377546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:20.654 00:04:20.654 INFO: Checking if target configuration is the same... 
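Note: what follows is a round-trip check on that file. The target has just been restarted from spdk_tgt_config.json, and the harness now dumps the live configuration again and diffs it against the file after sorting both sides. A condensed sketch of the same check (temporary file names are illustrative, and config_filter.py is assumed to read the configuration on stdin, since the xtrace does not show the redirections):

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.sorted
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.sorted
  diff -u /tmp/file.sorted /tmp/live.sorted   # expected empty: 'JSON config files are the same'

After that passes, the harness deletes MallocBdevForConfigChangeCheck and repeats the comparison, this time expecting a non-empty diff ('configuration change detected').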
00:04:20.654 12:46:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.654 12:46:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:20.654 12:46:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:20.654 12:46:36 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:20.654 12:46:36 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:20.654 12:46:36 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.654 12:46:36 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:20.654 12:46:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.654 + '[' 2 -ne 2 ']' 00:04:20.654 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:20.654 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:20.654 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:20.654 +++ basename /dev/fd/62 00:04:20.654 ++ mktemp /tmp/62.XXX 00:04:20.654 + tmp_file_1=/tmp/62.4LH 00:04:20.654 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.654 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.654 + tmp_file_2=/tmp/spdk_tgt_config.json.SKz 00:04:20.654 + ret=0 00:04:20.654 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:20.911 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:20.911 + diff -u /tmp/62.4LH /tmp/spdk_tgt_config.json.SKz 00:04:20.911 INFO: JSON config files are the same 00:04:20.911 + echo 'INFO: JSON config files are the same' 00:04:20.911 + rm /tmp/62.4LH /tmp/spdk_tgt_config.json.SKz 00:04:20.911 + exit 0 00:04:20.911 INFO: changing configuration and checking if this can be detected... 00:04:20.911 12:46:36 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:20.911 12:46:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:20.911 12:46:36 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.911 12:46:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.170 12:46:37 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.170 12:46:37 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:21.170 12:46:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.170 + '[' 2 -ne 2 ']' 00:04:21.170 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:21.170 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:21.170 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:21.170 +++ basename /dev/fd/62 00:04:21.170 ++ mktemp /tmp/62.XXX 00:04:21.170 + tmp_file_1=/tmp/62.ce6 00:04:21.170 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.170 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.170 + tmp_file_2=/tmp/spdk_tgt_config.json.qC7 00:04:21.170 + ret=0 00:04:21.170 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.739 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.739 + diff -u /tmp/62.ce6 /tmp/spdk_tgt_config.json.qC7 00:04:21.739 + ret=1 00:04:21.739 + echo '=== Start of file: /tmp/62.ce6 ===' 00:04:21.739 + cat /tmp/62.ce6 00:04:21.739 + echo '=== End of file: /tmp/62.ce6 ===' 00:04:21.739 + echo '' 00:04:21.739 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qC7 ===' 00:04:21.739 + cat /tmp/spdk_tgt_config.json.qC7 00:04:21.739 + echo '=== End of file: /tmp/spdk_tgt_config.json.qC7 ===' 00:04:21.739 + echo '' 00:04:21.739 + rm /tmp/62.ce6 /tmp/spdk_tgt_config.json.qC7 00:04:21.739 + exit 1 00:04:21.739 INFO: configuration change detected. 00:04:21.739 12:46:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:21.739 12:46:37 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:21.739 12:46:37 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:21.739 12:46:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.739 12:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.739 12:46:37 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:21.739 12:46:37 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:21.739 12:46:37 json_config -- json_config/json_config.sh@317 -- # [[ -n 59426 ]] 00:04:21.739 12:46:37 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:21.739 12:46:37 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:21.739 12:46:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.739 12:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.740 12:46:37 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:21.740 12:46:37 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:21.740 12:46:37 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:21.740 12:46:37 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:21.740 12:46:37 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:21.740 12:46:37 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.740 12:46:37 json_config -- json_config/json_config.sh@323 -- # killprocess 59426 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@948 -- # '[' -z 59426 ']' 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@952 -- # kill -0 59426 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@953 -- # uname 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59426 00:04:21.740 
killing process with pid 59426 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59426' 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@967 -- # kill 59426 00:04:21.740 12:46:37 json_config -- common/autotest_common.sh@972 -- # wait 59426 00:04:21.999 12:46:38 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.999 12:46:38 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:21.999 12:46:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.999 12:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.258 INFO: Success 00:04:22.258 12:46:38 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:22.258 12:46:38 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:22.258 00:04:22.258 real 0m8.625s 00:04:22.258 user 0m12.482s 00:04:22.258 sys 0m1.730s 00:04:22.258 ************************************ 00:04:22.258 END TEST json_config 00:04:22.258 ************************************ 00:04:22.259 12:46:38 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.259 12:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.259 12:46:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:22.259 12:46:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.259 12:46:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.259 12:46:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.259 12:46:38 -- common/autotest_common.sh@10 -- # set +x 00:04:22.259 ************************************ 00:04:22.259 START TEST json_config_extra_key 00:04:22.259 ************************************ 00:04:22.259 12:46:38 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:22.259 12:46:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.259 12:46:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.259 12:46:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.259 12:46:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.259 12:46:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.259 12:46:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.259 12:46:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:22.259 12:46:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.259 12:46:38 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:22.259 12:46:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:22.259 INFO: launching applications... 00:04:22.259 12:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.259 Waiting for target to run... 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59572 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59572 /var/tmp/spdk_tgt.sock 00:04:22.259 12:46:38 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59572 ']' 00:04:22.259 12:46:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:22.259 12:46:38 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.259 12:46:38 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.259 12:46:38 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.259 12:46:38 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.259 12:46:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.259 [2024-07-15 12:46:38.263890] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:22.259 [2024-07-15 12:46:38.264214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59572 ] 00:04:22.826 [2024-07-15 12:46:38.716283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.826 [2024-07-15 12:46:38.819881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.826 [2024-07-15 12:46:38.842023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.394 00:04:23.394 INFO: shutting down applications... 00:04:23.394 12:46:39 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.394 12:46:39 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:23.394 12:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
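The command line captured above starts spdk_tgt pinned to core 0 (-m 0x1) with a 1024 MB memory limit (-s 1024), a dedicated RPC socket (-r /var/tmp/spdk_tgt.sock), and the extra_key.json subsystem config preloaded (--json); the waitforlisten step then polls until that socket answers RPCs. A rough standalone equivalent, assuming the same repo layout shown in this log, is:

    #!/usr/bin/env bash
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    RPC_SOCK=/var/tmp/spdk_tgt.sock
    CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG" &
    tgt_pid=$!

    # Poll until the target answers on its RPC socket (roughly what waitforlisten does).
    until "$RPC_PY" -s "$RPC_SOCK" -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$tgt_pid" || { echo "spdk_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
    echo "target is listening on $RPC_SOCK"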
00:04:23.394 12:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59572 ]] 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59572 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59572 00:04:23.394 12:46:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.960 12:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.960 12:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.960 12:46:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59572 00:04:23.960 12:46:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.960 12:46:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:23.960 12:46:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.960 12:46:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.960 SPDK target shutdown done 00:04:23.960 12:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:23.960 Success 00:04:23.960 ************************************ 00:04:23.960 END TEST json_config_extra_key 00:04:23.960 ************************************ 00:04:23.960 00:04:23.960 real 0m1.660s 00:04:23.960 user 0m1.560s 00:04:23.960 sys 0m0.452s 00:04:23.960 12:46:39 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.960 12:46:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.960 12:46:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:23.960 12:46:39 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.960 12:46:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.960 12:46:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.960 12:46:39 -- common/autotest_common.sh@10 -- # set +x 00:04:23.960 ************************************ 00:04:23.960 START TEST alias_rpc 00:04:23.960 ************************************ 00:04:23.960 12:46:39 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.960 * Looking for test storage... 00:04:23.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:23.960 12:46:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:23.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
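The shutdown sequence above follows the usual pattern for these tests: send SIGINT, then poll the pid with kill -0 for up to 30 half-second intervals (about 15 seconds) before giving up. A hedged sketch of that loop, reusable for any daemon pid, is:

    # Graceful-shutdown poll in the style of json_config_test_shutdown_app (sketch, not the exact source).
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" 2> /dev/null || return 0       # already gone
        for ((i = 0; i < 30; i++)); do                     # ~15 s total, as in the log above
            kill -0 "$pid" 2> /dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        echo "process $pid did not exit in time" >&2
        return 1
    }

    # Usage with the pid from this run (hypothetical outside the test):
    # shutdown_app 59572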
00:04:23.961 12:46:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59637 00:04:23.961 12:46:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59637 00:04:23.961 12:46:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.961 12:46:39 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59637 ']' 00:04:23.961 12:46:39 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.961 12:46:39 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.961 12:46:39 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.961 12:46:39 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.961 12:46:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.961 [2024-07-15 12:46:39.955994] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:23.961 [2024-07-15 12:46:39.956099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:04:24.219 [2024-07-15 12:46:40.090020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.219 [2024-07-15 12:46:40.213800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.219 [2024-07-15 12:46:40.268103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:25.154 12:46:40 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.154 12:46:40 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:25.154 12:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:25.412 12:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59637 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59637 ']' 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59637 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59637 00:04:25.412 killing process with pid 59637 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59637' 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@967 -- # kill 59637 00:04:25.412 12:46:41 alias_rpc -- common/autotest_common.sh@972 -- # wait 59637 00:04:25.670 ************************************ 00:04:25.670 END TEST alias_rpc 00:04:25.670 ************************************ 00:04:25.670 00:04:25.670 real 0m1.818s 00:04:25.670 user 0m2.103s 00:04:25.670 sys 0m0.410s 00:04:25.670 12:46:41 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.670 12:46:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.670 12:46:41 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.670 12:46:41 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:25.670 12:46:41 -- spdk/autotest.sh@177 -- # 
run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:25.670 12:46:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.670 12:46:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.670 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.670 ************************************ 00:04:25.670 START TEST spdkcli_tcp 00:04:25.670 ************************************ 00:04:25.670 12:46:41 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:25.927 * Looking for test storage... 00:04:25.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:25.927 12:46:41 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.927 12:46:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59707 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59707 00:04:25.927 12:46:41 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59707 ']' 00:04:25.927 12:46:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:25.928 12:46:41 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.928 12:46:41 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.928 12:46:41 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.928 12:46:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.928 12:46:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.928 [2024-07-15 12:46:41.817353] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:25.928 [2024-07-15 12:46:41.817458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59707 ] 00:04:25.928 [2024-07-15 12:46:41.955682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.186 [2024-07-15 12:46:42.071673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.186 [2024-07-15 12:46:42.071686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.186 [2024-07-15 12:46:42.126081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:27.120 12:46:42 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.120 12:46:42 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:27.120 12:46:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59730 00:04:27.120 12:46:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:27.120 12:46:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:27.120 [ 00:04:27.120 "bdev_malloc_delete", 00:04:27.120 "bdev_malloc_create", 00:04:27.120 "bdev_null_resize", 00:04:27.120 "bdev_null_delete", 00:04:27.120 "bdev_null_create", 00:04:27.121 "bdev_nvme_cuse_unregister", 00:04:27.121 "bdev_nvme_cuse_register", 00:04:27.121 "bdev_opal_new_user", 00:04:27.121 "bdev_opal_set_lock_state", 00:04:27.121 "bdev_opal_delete", 00:04:27.121 "bdev_opal_get_info", 00:04:27.121 "bdev_opal_create", 00:04:27.121 "bdev_nvme_opal_revert", 00:04:27.121 "bdev_nvme_opal_init", 00:04:27.121 "bdev_nvme_send_cmd", 00:04:27.121 "bdev_nvme_get_path_iostat", 00:04:27.121 "bdev_nvme_get_mdns_discovery_info", 00:04:27.121 "bdev_nvme_stop_mdns_discovery", 00:04:27.121 "bdev_nvme_start_mdns_discovery", 00:04:27.121 "bdev_nvme_set_multipath_policy", 00:04:27.121 "bdev_nvme_set_preferred_path", 00:04:27.121 "bdev_nvme_get_io_paths", 00:04:27.121 "bdev_nvme_remove_error_injection", 00:04:27.121 "bdev_nvme_add_error_injection", 00:04:27.121 "bdev_nvme_get_discovery_info", 00:04:27.121 "bdev_nvme_stop_discovery", 00:04:27.121 "bdev_nvme_start_discovery", 00:04:27.121 "bdev_nvme_get_controller_health_info", 00:04:27.121 "bdev_nvme_disable_controller", 00:04:27.121 "bdev_nvme_enable_controller", 00:04:27.121 "bdev_nvme_reset_controller", 00:04:27.121 "bdev_nvme_get_transport_statistics", 00:04:27.121 "bdev_nvme_apply_firmware", 00:04:27.121 "bdev_nvme_detach_controller", 00:04:27.121 "bdev_nvme_get_controllers", 00:04:27.121 "bdev_nvme_attach_controller", 00:04:27.121 "bdev_nvme_set_hotplug", 00:04:27.121 "bdev_nvme_set_options", 00:04:27.121 "bdev_passthru_delete", 00:04:27.121 "bdev_passthru_create", 00:04:27.121 "bdev_lvol_set_parent_bdev", 00:04:27.121 "bdev_lvol_set_parent", 00:04:27.121 "bdev_lvol_check_shallow_copy", 00:04:27.121 "bdev_lvol_start_shallow_copy", 00:04:27.121 "bdev_lvol_grow_lvstore", 00:04:27.121 "bdev_lvol_get_lvols", 00:04:27.121 "bdev_lvol_get_lvstores", 00:04:27.121 "bdev_lvol_delete", 00:04:27.121 "bdev_lvol_set_read_only", 00:04:27.121 "bdev_lvol_resize", 00:04:27.121 "bdev_lvol_decouple_parent", 00:04:27.121 "bdev_lvol_inflate", 00:04:27.121 "bdev_lvol_rename", 00:04:27.121 "bdev_lvol_clone_bdev", 00:04:27.121 "bdev_lvol_clone", 00:04:27.121 "bdev_lvol_snapshot", 00:04:27.121 "bdev_lvol_create", 
00:04:27.121 "bdev_lvol_delete_lvstore", 00:04:27.121 "bdev_lvol_rename_lvstore", 00:04:27.121 "bdev_lvol_create_lvstore", 00:04:27.121 "bdev_raid_set_options", 00:04:27.121 "bdev_raid_remove_base_bdev", 00:04:27.121 "bdev_raid_add_base_bdev", 00:04:27.121 "bdev_raid_delete", 00:04:27.121 "bdev_raid_create", 00:04:27.121 "bdev_raid_get_bdevs", 00:04:27.121 "bdev_error_inject_error", 00:04:27.121 "bdev_error_delete", 00:04:27.121 "bdev_error_create", 00:04:27.121 "bdev_split_delete", 00:04:27.121 "bdev_split_create", 00:04:27.121 "bdev_delay_delete", 00:04:27.121 "bdev_delay_create", 00:04:27.121 "bdev_delay_update_latency", 00:04:27.121 "bdev_zone_block_delete", 00:04:27.121 "bdev_zone_block_create", 00:04:27.121 "blobfs_create", 00:04:27.121 "blobfs_detect", 00:04:27.121 "blobfs_set_cache_size", 00:04:27.121 "bdev_aio_delete", 00:04:27.121 "bdev_aio_rescan", 00:04:27.121 "bdev_aio_create", 00:04:27.121 "bdev_ftl_set_property", 00:04:27.121 "bdev_ftl_get_properties", 00:04:27.121 "bdev_ftl_get_stats", 00:04:27.121 "bdev_ftl_unmap", 00:04:27.121 "bdev_ftl_unload", 00:04:27.121 "bdev_ftl_delete", 00:04:27.121 "bdev_ftl_load", 00:04:27.121 "bdev_ftl_create", 00:04:27.121 "bdev_virtio_attach_controller", 00:04:27.121 "bdev_virtio_scsi_get_devices", 00:04:27.121 "bdev_virtio_detach_controller", 00:04:27.121 "bdev_virtio_blk_set_hotplug", 00:04:27.121 "bdev_iscsi_delete", 00:04:27.121 "bdev_iscsi_create", 00:04:27.121 "bdev_iscsi_set_options", 00:04:27.121 "bdev_uring_delete", 00:04:27.121 "bdev_uring_rescan", 00:04:27.121 "bdev_uring_create", 00:04:27.121 "accel_error_inject_error", 00:04:27.121 "ioat_scan_accel_module", 00:04:27.121 "dsa_scan_accel_module", 00:04:27.121 "iaa_scan_accel_module", 00:04:27.121 "keyring_file_remove_key", 00:04:27.121 "keyring_file_add_key", 00:04:27.121 "keyring_linux_set_options", 00:04:27.121 "iscsi_get_histogram", 00:04:27.121 "iscsi_enable_histogram", 00:04:27.121 "iscsi_set_options", 00:04:27.121 "iscsi_get_auth_groups", 00:04:27.121 "iscsi_auth_group_remove_secret", 00:04:27.121 "iscsi_auth_group_add_secret", 00:04:27.121 "iscsi_delete_auth_group", 00:04:27.121 "iscsi_create_auth_group", 00:04:27.121 "iscsi_set_discovery_auth", 00:04:27.121 "iscsi_get_options", 00:04:27.121 "iscsi_target_node_request_logout", 00:04:27.121 "iscsi_target_node_set_redirect", 00:04:27.121 "iscsi_target_node_set_auth", 00:04:27.121 "iscsi_target_node_add_lun", 00:04:27.121 "iscsi_get_stats", 00:04:27.121 "iscsi_get_connections", 00:04:27.121 "iscsi_portal_group_set_auth", 00:04:27.121 "iscsi_start_portal_group", 00:04:27.121 "iscsi_delete_portal_group", 00:04:27.121 "iscsi_create_portal_group", 00:04:27.121 "iscsi_get_portal_groups", 00:04:27.121 "iscsi_delete_target_node", 00:04:27.121 "iscsi_target_node_remove_pg_ig_maps", 00:04:27.121 "iscsi_target_node_add_pg_ig_maps", 00:04:27.121 "iscsi_create_target_node", 00:04:27.121 "iscsi_get_target_nodes", 00:04:27.121 "iscsi_delete_initiator_group", 00:04:27.121 "iscsi_initiator_group_remove_initiators", 00:04:27.121 "iscsi_initiator_group_add_initiators", 00:04:27.121 "iscsi_create_initiator_group", 00:04:27.121 "iscsi_get_initiator_groups", 00:04:27.121 "nvmf_set_crdt", 00:04:27.121 "nvmf_set_config", 00:04:27.121 "nvmf_set_max_subsystems", 00:04:27.121 "nvmf_stop_mdns_prr", 00:04:27.121 "nvmf_publish_mdns_prr", 00:04:27.121 "nvmf_subsystem_get_listeners", 00:04:27.121 "nvmf_subsystem_get_qpairs", 00:04:27.121 "nvmf_subsystem_get_controllers", 00:04:27.121 "nvmf_get_stats", 00:04:27.121 "nvmf_get_transports", 00:04:27.121 
"nvmf_create_transport", 00:04:27.121 "nvmf_get_targets", 00:04:27.121 "nvmf_delete_target", 00:04:27.121 "nvmf_create_target", 00:04:27.121 "nvmf_subsystem_allow_any_host", 00:04:27.121 "nvmf_subsystem_remove_host", 00:04:27.121 "nvmf_subsystem_add_host", 00:04:27.121 "nvmf_ns_remove_host", 00:04:27.121 "nvmf_ns_add_host", 00:04:27.121 "nvmf_subsystem_remove_ns", 00:04:27.121 "nvmf_subsystem_add_ns", 00:04:27.121 "nvmf_subsystem_listener_set_ana_state", 00:04:27.121 "nvmf_discovery_get_referrals", 00:04:27.121 "nvmf_discovery_remove_referral", 00:04:27.121 "nvmf_discovery_add_referral", 00:04:27.121 "nvmf_subsystem_remove_listener", 00:04:27.121 "nvmf_subsystem_add_listener", 00:04:27.121 "nvmf_delete_subsystem", 00:04:27.121 "nvmf_create_subsystem", 00:04:27.121 "nvmf_get_subsystems", 00:04:27.121 "env_dpdk_get_mem_stats", 00:04:27.121 "nbd_get_disks", 00:04:27.121 "nbd_stop_disk", 00:04:27.121 "nbd_start_disk", 00:04:27.121 "ublk_recover_disk", 00:04:27.121 "ublk_get_disks", 00:04:27.121 "ublk_stop_disk", 00:04:27.121 "ublk_start_disk", 00:04:27.121 "ublk_destroy_target", 00:04:27.121 "ublk_create_target", 00:04:27.121 "virtio_blk_create_transport", 00:04:27.121 "virtio_blk_get_transports", 00:04:27.121 "vhost_controller_set_coalescing", 00:04:27.121 "vhost_get_controllers", 00:04:27.121 "vhost_delete_controller", 00:04:27.121 "vhost_create_blk_controller", 00:04:27.121 "vhost_scsi_controller_remove_target", 00:04:27.121 "vhost_scsi_controller_add_target", 00:04:27.121 "vhost_start_scsi_controller", 00:04:27.121 "vhost_create_scsi_controller", 00:04:27.121 "thread_set_cpumask", 00:04:27.121 "framework_get_governor", 00:04:27.121 "framework_get_scheduler", 00:04:27.121 "framework_set_scheduler", 00:04:27.121 "framework_get_reactors", 00:04:27.121 "thread_get_io_channels", 00:04:27.121 "thread_get_pollers", 00:04:27.121 "thread_get_stats", 00:04:27.121 "framework_monitor_context_switch", 00:04:27.121 "spdk_kill_instance", 00:04:27.121 "log_enable_timestamps", 00:04:27.121 "log_get_flags", 00:04:27.121 "log_clear_flag", 00:04:27.121 "log_set_flag", 00:04:27.121 "log_get_level", 00:04:27.121 "log_set_level", 00:04:27.121 "log_get_print_level", 00:04:27.121 "log_set_print_level", 00:04:27.121 "framework_enable_cpumask_locks", 00:04:27.121 "framework_disable_cpumask_locks", 00:04:27.121 "framework_wait_init", 00:04:27.121 "framework_start_init", 00:04:27.121 "scsi_get_devices", 00:04:27.121 "bdev_get_histogram", 00:04:27.121 "bdev_enable_histogram", 00:04:27.121 "bdev_set_qos_limit", 00:04:27.121 "bdev_set_qd_sampling_period", 00:04:27.121 "bdev_get_bdevs", 00:04:27.121 "bdev_reset_iostat", 00:04:27.121 "bdev_get_iostat", 00:04:27.121 "bdev_examine", 00:04:27.121 "bdev_wait_for_examine", 00:04:27.121 "bdev_set_options", 00:04:27.121 "notify_get_notifications", 00:04:27.121 "notify_get_types", 00:04:27.121 "accel_get_stats", 00:04:27.121 "accel_set_options", 00:04:27.121 "accel_set_driver", 00:04:27.121 "accel_crypto_key_destroy", 00:04:27.121 "accel_crypto_keys_get", 00:04:27.121 "accel_crypto_key_create", 00:04:27.121 "accel_assign_opc", 00:04:27.121 "accel_get_module_info", 00:04:27.121 "accel_get_opc_assignments", 00:04:27.121 "vmd_rescan", 00:04:27.121 "vmd_remove_device", 00:04:27.121 "vmd_enable", 00:04:27.121 "sock_get_default_impl", 00:04:27.121 "sock_set_default_impl", 00:04:27.121 "sock_impl_set_options", 00:04:27.121 "sock_impl_get_options", 00:04:27.121 "iobuf_get_stats", 00:04:27.121 "iobuf_set_options", 00:04:27.121 "framework_get_pci_devices", 00:04:27.121 
"framework_get_config", 00:04:27.121 "framework_get_subsystems", 00:04:27.121 "trace_get_info", 00:04:27.121 "trace_get_tpoint_group_mask", 00:04:27.121 "trace_disable_tpoint_group", 00:04:27.121 "trace_enable_tpoint_group", 00:04:27.121 "trace_clear_tpoint_mask", 00:04:27.121 "trace_set_tpoint_mask", 00:04:27.121 "keyring_get_keys", 00:04:27.121 "spdk_get_version", 00:04:27.121 "rpc_get_methods" 00:04:27.121 ] 00:04:27.121 12:46:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:27.121 12:46:43 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.122 12:46:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:27.122 12:46:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59707 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59707 ']' 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59707 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59707 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.122 killing process with pid 59707 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59707' 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59707 00:04:27.122 12:46:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59707 00:04:27.703 ************************************ 00:04:27.703 END TEST spdkcli_tcp 00:04:27.703 ************************************ 00:04:27.703 00:04:27.703 real 0m1.863s 00:04:27.703 user 0m3.541s 00:04:27.703 sys 0m0.447s 00:04:27.703 12:46:43 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.703 12:46:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.703 12:46:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.703 12:46:43 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.703 12:46:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.703 12:46:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.703 12:46:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.703 ************************************ 00:04:27.703 START TEST dpdk_mem_utility 00:04:27.703 ************************************ 00:04:27.703 12:46:43 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.703 * Looking for test storage... 
00:04:27.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:27.703 12:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:27.703 12:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59798 00:04:27.703 12:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.703 12:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59798 00:04:27.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.703 12:46:43 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59798 ']' 00:04:27.703 12:46:43 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.703 12:46:43 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.703 12:46:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.703 12:46:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.703 12:46:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.703 [2024-07-15 12:46:43.722463] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:27.703 [2024-07-15 12:46:43.722546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59798 ] 00:04:27.961 [2024-07-15 12:46:43.861142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.961 [2024-07-15 12:46:43.989087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.219 [2024-07-15 12:46:44.047152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:28.785 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.785 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:28.785 12:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.785 12:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.785 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.785 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.785 { 00:04:28.785 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.785 } 00:04:28.785 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.785 12:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:28.785 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:28.785 1 heaps totaling size 814.000000 MiB 00:04:28.785 size: 814.000000 MiB heap id: 0 00:04:28.785 end heaps---------- 00:04:28.785 8 mempools totaling size 598.116089 MiB 00:04:28.785 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.785 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.785 size: 84.521057 MiB name: bdev_io_59798 00:04:28.785 size: 51.011292 MiB name: evtpool_59798 00:04:28.785 size: 50.003479 
MiB name: msgpool_59798 00:04:28.785 size: 21.763794 MiB name: PDU_Pool 00:04:28.785 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.785 size: 0.026123 MiB name: Session_Pool 00:04:28.785 end mempools------- 00:04:28.785 6 memzones totaling size 4.142822 MiB 00:04:28.785 size: 1.000366 MiB name: RG_ring_0_59798 00:04:28.785 size: 1.000366 MiB name: RG_ring_1_59798 00:04:28.785 size: 1.000366 MiB name: RG_ring_4_59798 00:04:28.785 size: 1.000366 MiB name: RG_ring_5_59798 00:04:28.785 size: 0.125366 MiB name: RG_ring_2_59798 00:04:28.785 size: 0.015991 MiB name: RG_ring_3_59798 00:04:28.785 end memzones------- 00:04:28.785 12:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:29.044 heap id: 0 total size: 814.000000 MiB number of busy elements: 305 number of free elements: 15 00:04:29.044 list of free elements. size: 12.471008 MiB 00:04:29.044 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:29.044 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:29.044 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:29.044 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:29.044 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:29.044 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:29.044 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:29.044 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:29.044 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:29.044 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:04:29.044 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:29.044 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:29.044 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:29.044 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:29.044 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:29.044 list of standard malloc elements. 
size: 199.266418 MiB 00:04:29.044 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:29.044 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:29.044 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:29.044 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:29.044 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:29.044 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:29.044 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:29.044 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:29.044 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:29.044 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:29.044 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:29.044 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:29.044 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:29.044 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:29.044 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:29.044 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:29.044 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:29.044 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:29.045 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91cc0 
with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94180 with size: 0.000183 MiB 
00:04:29.045 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:29.045 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:29.046 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:29.046 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:29.046 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:29.046 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:29.046 element at 
address: 0x200027e6d380 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f840 
with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:29.046 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:29.046 list of memzone associated elements. size: 602.262573 MiB 00:04:29.046 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:29.046 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:29.046 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:29.046 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:29.046 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:29.046 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59798_0 00:04:29.046 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:29.046 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59798_0 00:04:29.046 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:29.046 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59798_0 00:04:29.046 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:29.046 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:29.046 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:29.046 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:29.046 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:29.046 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59798 00:04:29.046 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:29.046 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59798 00:04:29.046 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:29.046 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59798 00:04:29.046 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:29.046 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:29.046 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:29.046 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:29.046 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:29.046 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:29.046 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:29.046 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:29.046 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:29.046 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59798 00:04:29.046 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:29.046 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59798 00:04:29.046 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:29.046 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59798 00:04:29.046 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:29.046 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59798 00:04:29.046 element 
at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:29.046 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59798 00:04:29.046 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:29.046 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:29.046 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:29.046 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:29.046 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:29.046 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:29.046 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:29.046 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59798 00:04:29.046 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:29.046 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:29.046 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:29.046 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:29.046 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:29.046 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59798 00:04:29.046 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:29.046 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:29.046 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:29.046 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59798 00:04:29.046 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:29.046 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59798 00:04:29.046 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:29.046 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:29.047 12:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:29.047 12:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59798 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59798 ']' 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59798 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59798 00:04:29.047 killing process with pid 59798 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59798' 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59798 00:04:29.047 12:46:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59798 00:04:29.305 00:04:29.305 real 0m1.729s 00:04:29.305 user 0m1.921s 00:04:29.305 sys 0m0.423s 00:04:29.305 ************************************ 00:04:29.305 END TEST dpdk_mem_utility 00:04:29.305 ************************************ 00:04:29.305 12:46:45 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.305 12:46:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:29.305 12:46:45 -- common/autotest_common.sh@1142 -- # return 0 
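The dpdk_mem_utility test that just finished boils down to two steps: ask the running target to snapshot its DPDK memory state, then post-process the snapshot with the helper script that produced the heap, mempool and memzone listing above. A hedged recreation, assuming the target is listening on the default /var/tmp/spdk.sock socket, is:

    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # Ask the target to write its memory snapshot (the log shows it lands in /tmp/spdk_mem_dump.txt).
    "$RPC_PY" env_dpdk_get_mem_stats

    # Summarize heaps, mempools and memzones, then drill into heap 0 as the test does.
    "$MEM_SCRIPT"
    "$MEM_SCRIPT" -m 0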
00:04:29.305 12:46:45 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.305 12:46:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.305 12:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.305 12:46:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.305 ************************************ 00:04:29.305 START TEST event 00:04:29.305 ************************************ 00:04:29.305 12:46:45 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.563 * Looking for test storage... 00:04:29.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:29.563 12:46:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:29.563 12:46:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.563 12:46:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.563 12:46:45 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:29.563 12:46:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.563 12:46:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.563 ************************************ 00:04:29.563 START TEST event_perf 00:04:29.563 ************************************ 00:04:29.563 12:46:45 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.563 Running I/O for 1 seconds...[2024-07-15 12:46:45.470412] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:29.563 [2024-07-15 12:46:45.470813] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59875 ] 00:04:29.563 [2024-07-15 12:46:45.601303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.821 [2024-07-15 12:46:45.715371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.821 [2024-07-15 12:46:45.715482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.821 [2024-07-15 12:46:45.715612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.821 [2024-07-15 12:46:45.715615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.777 Running I/O for 1 seconds... 00:04:30.777 lcore 0: 192365 00:04:30.777 lcore 1: 192363 00:04:30.777 lcore 2: 192363 00:04:30.777 lcore 3: 192364 00:04:30.777 done. 
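The per-lcore counters above come from a single event_perf pass with the core mask and runtime shown in the run_test line. A sketch of reproducing that pass by hand, assuming the /home/vagrant/spdk_repo/spdk layout this job uses and a completed build with hugepages configured:

    # four reactors (core mask 0xF), run the event loop for one second
    cd /home/vagrant/spdk_repo/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1
    # each reactor prints its event count, e.g. "lcore 0: 192365", followed by "done."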
00:04:30.777 00:04:30.777 real 0m1.345s 00:04:30.777 user 0m4.163s 00:04:30.777 sys 0m0.056s 00:04:30.778 12:46:46 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.778 12:46:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.778 ************************************ 00:04:30.778 END TEST event_perf 00:04:30.778 ************************************ 00:04:31.047 12:46:46 event -- common/autotest_common.sh@1142 -- # return 0 00:04:31.047 12:46:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:31.047 12:46:46 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:31.047 12:46:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.047 12:46:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.047 ************************************ 00:04:31.047 START TEST event_reactor 00:04:31.047 ************************************ 00:04:31.047 12:46:46 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:31.047 [2024-07-15 12:46:46.864437] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:31.047 [2024-07-15 12:46:46.864519] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59908 ] 00:04:31.047 [2024-07-15 12:46:46.998045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.350 [2024-07-15 12:46:47.112408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.293 test_start 00:04:32.293 oneshot 00:04:32.293 tick 100 00:04:32.293 tick 100 00:04:32.293 tick 250 00:04:32.293 tick 100 00:04:32.293 tick 100 00:04:32.293 tick 250 00:04:32.293 tick 100 00:04:32.293 tick 500 00:04:32.293 tick 100 00:04:32.293 tick 100 00:04:32.293 tick 250 00:04:32.293 tick 100 00:04:32.293 tick 100 00:04:32.293 test_end 00:04:32.293 00:04:32.293 real 0m1.342s 00:04:32.293 user 0m1.180s 00:04:32.293 sys 0m0.054s 00:04:32.293 ************************************ 00:04:32.293 END TEST event_reactor 00:04:32.293 ************************************ 00:04:32.293 12:46:48 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.293 12:46:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:32.293 12:46:48 event -- common/autotest_common.sh@1142 -- # return 0 00:04:32.293 12:46:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.293 12:46:48 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:32.293 12:46:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.293 12:46:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.293 ************************************ 00:04:32.293 START TEST event_reactor_perf 00:04:32.293 ************************************ 00:04:32.293 12:46:48 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.293 [2024-07-15 12:46:48.260490] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:32.293 [2024-07-15 12:46:48.260571] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59944 ] 00:04:32.551 [2024-07-15 12:46:48.392719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.551 [2024-07-15 12:46:48.496862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.926 test_start 00:04:33.926 test_end 00:04:33.926 Performance: 377176 events per second 00:04:33.926 00:04:33.926 real 0m1.335s 00:04:33.926 user 0m1.182s 00:04:33.926 sys 0m0.048s 00:04:33.926 12:46:49 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.926 ************************************ 00:04:33.926 END TEST event_reactor_perf 00:04:33.926 ************************************ 00:04:33.926 12:46:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.926 12:46:49 event -- common/autotest_common.sh@1142 -- # return 0 00:04:33.926 12:46:49 event -- event/event.sh@49 -- # uname -s 00:04:33.926 12:46:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.926 12:46:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.926 12:46:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.926 12:46:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.926 12:46:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.926 ************************************ 00:04:33.926 START TEST event_scheduler 00:04:33.926 ************************************ 00:04:33.926 12:46:49 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.927 * Looking for test storage... 00:04:33.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:33.927 12:46:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.927 12:46:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60005 00:04:33.927 12:46:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.927 12:46:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60005 00:04:33.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.927 12:46:49 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60005 ']' 00:04:33.927 12:46:49 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.927 12:46:49 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.927 12:46:49 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.927 12:46:49 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.927 12:46:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.927 12:46:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.927 [2024-07-15 12:46:49.759962] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:33.927 [2024-07-15 12:46:49.760058] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60005 ] 00:04:33.927 [2024-07-15 12:46:49.895582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:34.186 [2024-07-15 12:46:50.036238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.186 [2024-07-15 12:46:50.036392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.186 [2024-07-15 12:46:50.036718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:34.186 [2024-07-15 12:46:50.036862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.755 12:46:50 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.755 12:46:50 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:34.755 12:46:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:34.755 12:46:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.755 12:46:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.755 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.755 POWER: Cannot set governor of lcore 0 to performance 00:04:34.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.755 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.755 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.755 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:34.755 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:34.755 POWER: Unable to set Power Management Environment for lcore 0 00:04:34.755 [2024-07-15 12:46:50.772153] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:34.755 [2024-07-15 12:46:50.772166] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:34.755 [2024-07-15 12:46:50.772176] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:34.755 [2024-07-15 12:46:50.772189] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:34.755 [2024-07-15 12:46:50.772197] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:34.755 [2024-07-15 12:46:50.772204] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:34.755 12:46:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.755 12:46:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:34.755 12:46:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.755 12:46:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:35.015 [2024-07-15 12:46:50.831449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:35.015 [2024-07-15 12:46:50.864716] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:35.015 12:46:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.015 12:46:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:35.015 12:46:50 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.015 12:46:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.015 12:46:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:35.015 ************************************ 00:04:35.015 START TEST scheduler_create_thread 00:04:35.015 ************************************ 00:04:35.015 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:35.015 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:35.015 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.015 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.015 2 00:04:35.015 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 3 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 4 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 5 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 6 00:04:35.016 
12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 7 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 8 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 9 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 10 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.016 12:46:50 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.016 12:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.451 12:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.451 12:46:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:36.451 12:46:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:36.451 12:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.451 12:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.828 ************************************ 00:04:37.828 END TEST scheduler_create_thread 00:04:37.828 ************************************ 00:04:37.828 12:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.828 00:04:37.828 real 0m2.617s 00:04:37.828 user 0m0.020s 00:04:37.828 sys 0m0.004s 00:04:37.828 12:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.828 12:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:37.828 12:46:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:37.828 12:46:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60005 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60005 ']' 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60005 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60005 00:04:37.828 killing process with pid 60005 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60005' 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60005 00:04:37.828 12:46:53 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60005 00:04:38.085 [2024-07-15 12:46:53.973239] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
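The scheduler test drives everything over RPC: it switches the framework to the dynamic scheduler, completes init, then uses the scheduler_plugin RPC plugin to create pinned and idle threads, raise one thread's active ratio, and delete another. A condensed sketch of the traced sequence; rpc_cmd is the harness wrapper around scripts/rpc.py, and the thread IDs 11 and 12 seen above are simply the responses returned in this particular run:

    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init

    # one busy thread and one idle thread pinned to each core
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # (the traced run repeats these for masks 0x2, 0x4 and 0x8, and also creates
    #  an unpinned one_third_active thread with -a 30)

    # an unpinned thread whose active ratio is then raised to 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

    # a throw-away thread that is created and immediately deleted
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"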
00:04:38.343 00:04:38.343 real 0m4.580s 00:04:38.343 user 0m8.668s 00:04:38.344 sys 0m0.359s 00:04:38.344 12:46:54 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.344 12:46:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.344 ************************************ 00:04:38.344 END TEST event_scheduler 00:04:38.344 ************************************ 00:04:38.344 12:46:54 event -- common/autotest_common.sh@1142 -- # return 0 00:04:38.344 12:46:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:38.344 12:46:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:38.344 12:46:54 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.344 12:46:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.344 12:46:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.344 ************************************ 00:04:38.344 START TEST app_repeat 00:04:38.344 ************************************ 00:04:38.344 12:46:54 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60105 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.344 Process app_repeat pid: 60105 00:04:38.344 spdk_app_start Round 0 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60105' 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:38.344 12:46:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60105 /var/tmp/spdk-nbd.sock 00:04:38.344 12:46:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60105 ']' 00:04:38.344 12:46:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.344 12:46:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.344 12:46:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:38.344 12:46:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.344 12:46:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.344 [2024-07-15 12:46:54.292968] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:38.344 [2024-07-15 12:46:54.293250] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60105 ] 00:04:38.602 [2024-07-15 12:46:54.428061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.602 [2024-07-15 12:46:54.539582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.602 [2024-07-15 12:46:54.539591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.602 [2024-07-15 12:46:54.591464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.241 12:46:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.241 12:46:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:39.241 12:46:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.499 Malloc0 00:04:39.758 12:46:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.017 Malloc1 00:04:40.017 12:46:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.017 12:46:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.276 /dev/nbd0 00:04:40.276 12:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.276 12:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:40.276 12:46:56 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.276 1+0 records in 00:04:40.276 1+0 records out 00:04:40.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271592 s, 15.1 MB/s 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:40.276 12:46:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:40.276 12:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.276 12:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.276 12:46:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.535 /dev/nbd1 00:04:40.535 12:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.535 12:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.535 1+0 records in 00:04:40.535 1+0 records out 00:04:40.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320573 s, 12.8 MB/s 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:40.535 12:46:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:40.535 12:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.535 12:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.535 12:46:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:40.535 12:46:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.535 12:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.793 { 00:04:40.793 "nbd_device": "/dev/nbd0", 00:04:40.793 "bdev_name": "Malloc0" 00:04:40.793 }, 00:04:40.793 { 00:04:40.793 "nbd_device": "/dev/nbd1", 00:04:40.793 "bdev_name": "Malloc1" 00:04:40.793 } 00:04:40.793 ]' 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.793 { 00:04:40.793 "nbd_device": "/dev/nbd0", 00:04:40.793 "bdev_name": "Malloc0" 00:04:40.793 }, 00:04:40.793 { 00:04:40.793 "nbd_device": "/dev/nbd1", 00:04:40.793 "bdev_name": "Malloc1" 00:04:40.793 } 00:04:40.793 ]' 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.793 /dev/nbd1' 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.793 /dev/nbd1' 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.793 12:46:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.794 12:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.794 12:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.794 12:46:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.794 12:46:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.794 12:46:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.794 12:46:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.794 256+0 records in 00:04:40.794 256+0 records out 00:04:40.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00728412 s, 144 MB/s 00:04:40.794 12:46:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.794 12:46:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.052 256+0 records in 00:04:41.052 256+0 records out 00:04:41.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024443 s, 42.9 MB/s 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.052 256+0 records in 00:04:41.052 256+0 records out 00:04:41.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241082 s, 43.5 MB/s 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.052 12:46:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.311 12:46:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.571 12:46:57 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.571 12:46:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.831 12:46:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.831 12:46:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.089 12:46:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.348 [2024-07-15 12:46:58.309195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.607 [2024-07-15 12:46:58.417417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.607 [2024-07-15 12:46:58.417420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.607 [2024-07-15 12:46:58.469748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.607 [2024-07-15 12:46:58.469831] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.607 [2024-07-15 12:46:58.469845] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.137 spdk_app_start Round 1 00:04:45.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.137 12:47:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.137 12:47:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.137 12:47:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60105 /var/tmp/spdk-nbd.sock 00:04:45.137 12:47:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60105 ']' 00:04:45.137 12:47:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.137 12:47:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.137 12:47:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
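Each app_repeat round performs the same nbd round-trip seen in the trace above: create two 64 MB malloc bdevs with a 4 KiB block size over the app's RPC socket, attach them to /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each device, compare it back with cmp, then detach. A minimal sketch of that verification loop, assuming an SPDK application (app_repeat here) is already serving /var/tmp/spdk-nbd.sock and using a local scratch file in place of the test tree's nbdrandtest path:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    $rpc -s $sock bdev_malloc_create 64 4096          # -> Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096          # -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256       # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest $nbd                          # verify the data read back
    done
    rm nbdrandtest

    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1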
00:04:45.137 12:47:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.137 12:47:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.395 12:47:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.395 12:47:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:45.395 12:47:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.653 Malloc0 00:04:45.653 12:47:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.912 Malloc1 00:04:45.912 12:47:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.912 12:47:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.171 /dev/nbd0 00:04:46.171 12:47:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.171 12:47:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.171 1+0 records in 00:04:46.171 1+0 records out 
00:04:46.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258096 s, 15.9 MB/s 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:46.171 12:47:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:46.171 12:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.171 12:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.171 12:47:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.430 /dev/nbd1 00:04:46.430 12:47:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.430 12:47:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.430 1+0 records in 00:04:46.430 1+0 records out 00:04:46.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328014 s, 12.5 MB/s 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:46.430 12:47:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:46.430 12:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.430 12:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.430 12:47:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.430 12:47:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.430 12:47:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.996 12:47:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.997 { 00:04:46.997 "nbd_device": "/dev/nbd0", 00:04:46.997 "bdev_name": "Malloc0" 00:04:46.997 }, 00:04:46.997 { 00:04:46.997 "nbd_device": "/dev/nbd1", 00:04:46.997 "bdev_name": "Malloc1" 00:04:46.997 } 
00:04:46.997 ]' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.997 { 00:04:46.997 "nbd_device": "/dev/nbd0", 00:04:46.997 "bdev_name": "Malloc0" 00:04:46.997 }, 00:04:46.997 { 00:04:46.997 "nbd_device": "/dev/nbd1", 00:04:46.997 "bdev_name": "Malloc1" 00:04:46.997 } 00:04:46.997 ]' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.997 /dev/nbd1' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.997 /dev/nbd1' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.997 256+0 records in 00:04:46.997 256+0 records out 00:04:46.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446491 s, 235 MB/s 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.997 256+0 records in 00:04:46.997 256+0 records out 00:04:46.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244412 s, 42.9 MB/s 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.997 256+0 records in 00:04:46.997 256+0 records out 00:04:46.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270377 s, 38.8 MB/s 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.997 12:47:02 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.997 12:47:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.255 12:47:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.513 12:47:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.514 12:47:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.773 12:47:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.773 12:47:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.773 12:47:03 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.032 12:47:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.032 12:47:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.291 12:47:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.291 [2024-07-15 12:47:04.340294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.549 [2024-07-15 12:47:04.451261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.549 [2024-07-15 12:47:04.451270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.549 [2024-07-15 12:47:04.505294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:48.549 [2024-07-15 12:47:04.505407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.549 [2024-07-15 12:47:04.505433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:51.105 spdk_app_start Round 2 00:04:51.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.105 12:47:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.105 12:47:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:51.105 12:47:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60105 /var/tmp/spdk-nbd.sock 00:04:51.105 12:47:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60105 ']' 00:04:51.105 12:47:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.105 12:47:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.105 12:47:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
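The write/verify pass traced above (nbd_dd_data_verify) boils down to three standard tools. A minimal sketch of the same flow follows, with the paths and sizes copied from this run and assuming both NBD devices are already attached:

    # write phase: stage 1 MiB of random data in a temp file, then copy it
    # raw onto each NBD device with O_DIRECT
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB read back from each device must match the file
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"     # non-zero exit status means the data did not round-trip
    done
    rm "$tmp"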
00:04:51.105 12:47:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.105 12:47:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.671 12:47:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.671 12:47:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:51.671 12:47:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.671 Malloc0 00:04:51.930 12:47:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.189 Malloc1 00:04:52.189 12:47:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.189 12:47:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.447 /dev/nbd0 00:04:52.447 12:47:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.447 12:47:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.447 1+0 records in 00:04:52.447 1+0 records out 
00:04:52.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542902 s, 7.5 MB/s 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.447 12:47:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:52.447 12:47:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.447 12:47:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.448 12:47:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.706 /dev/nbd1 00:04:52.706 12:47:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.706 12:47:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.706 1+0 records in 00:04:52.706 1+0 records out 00:04:52.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030789 s, 13.3 MB/s 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.706 12:47:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:52.706 12:47:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.706 12:47:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.706 12:47:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.706 12:47:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.706 12:47:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.965 12:47:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.965 { 00:04:52.965 "nbd_device": "/dev/nbd0", 00:04:52.965 "bdev_name": "Malloc0" 00:04:52.965 }, 00:04:52.965 { 00:04:52.965 "nbd_device": "/dev/nbd1", 00:04:52.965 "bdev_name": "Malloc1" 00:04:52.965 } 
00:04:52.965 ]' 00:04:52.965 12:47:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.965 { 00:04:52.965 "nbd_device": "/dev/nbd0", 00:04:52.965 "bdev_name": "Malloc0" 00:04:52.965 }, 00:04:52.965 { 00:04:52.965 "nbd_device": "/dev/nbd1", 00:04:52.965 "bdev_name": "Malloc1" 00:04:52.965 } 00:04:52.965 ]' 00:04:52.965 12:47:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.965 12:47:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.965 /dev/nbd1' 00:04:52.965 12:47:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.965 /dev/nbd1' 00:04:52.965 12:47:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.225 256+0 records in 00:04:53.225 256+0 records out 00:04:53.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00987884 s, 106 MB/s 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.225 256+0 records in 00:04:53.225 256+0 records out 00:04:53.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219694 s, 47.7 MB/s 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.225 256+0 records in 00:04:53.225 256+0 records out 00:04:53.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265359 s, 39.5 MB/s 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.225 12:47:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.225 12:47:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.484 12:47:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.741 12:47:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.741 12:47:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.741 12:47:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.741 12:47:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.742 12:47:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.742 12:47:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.742 12:47:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.742 12:47:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.742 12:47:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.742 12:47:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.742 12:47:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.999 12:47:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.999 12:47:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.999 12:47:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.999 12:47:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.999 12:47:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.563 12:47:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.563 [2024-07-15 12:47:10.527184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.819 [2024-07-15 12:47:10.631773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.819 [2024-07-15 12:47:10.631786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.819 [2024-07-15 12:47:10.683885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.819 [2024-07-15 12:47:10.683969] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.819 [2024-07-15 12:47:10.683984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.349 12:47:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60105 /var/tmp/spdk-nbd.sock 00:04:57.349 12:47:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60105 ']' 00:04:57.349 12:47:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.349 12:47:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.349 12:47:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
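The nbd_get_count check that ran just before the SIGTERM above derives the device count straight from the RPC output. A condensed sketch of that jq/grep pattern, assuming the app is still serving /var/tmp/spdk-nbd.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)    # JSON array of {nbd_device, bdev_name}
    names=$(echo "$json" | jq -r '.[] | .nbd_device')         # one /dev/nbdX per line, empty if none
    count=$(echo "$names" | grep -c /dev/nbd || true)         # grep -c exits 1 on zero matches, hence the || true
    echo "$count"                                             # 0 here, since the disks were already stopped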
00:04:57.349 12:47:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.349 12:47:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:57.605 12:47:13 event.app_repeat -- event/event.sh@39 -- # killprocess 60105 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60105 ']' 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60105 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60105 00:04:57.605 killing process with pid 60105 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60105' 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60105 00:04:57.605 12:47:13 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60105 00:04:57.861 spdk_app_start is called in Round 0. 00:04:57.861 Shutdown signal received, stop current app iteration 00:04:57.861 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:04:57.861 spdk_app_start is called in Round 1. 00:04:57.861 Shutdown signal received, stop current app iteration 00:04:57.861 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:04:57.861 spdk_app_start is called in Round 2. 00:04:57.861 Shutdown signal received, stop current app iteration 00:04:57.861 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:04:57.861 spdk_app_start is called in Round 3. 
00:04:57.861 Shutdown signal received, stop current app iteration 00:04:57.861 12:47:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:57.861 12:47:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:57.861 00:04:57.861 real 0m19.582s 00:04:57.861 user 0m44.222s 00:04:57.861 sys 0m2.937s 00:04:57.861 12:47:13 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.861 ************************************ 00:04:57.861 END TEST app_repeat 00:04:57.861 ************************************ 00:04:57.861 12:47:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.861 12:47:13 event -- common/autotest_common.sh@1142 -- # return 0 00:04:57.861 12:47:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:57.861 12:47:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.861 12:47:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.861 12:47:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.861 12:47:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.861 ************************************ 00:04:57.861 START TEST cpu_locks 00:04:57.861 ************************************ 00:04:57.861 12:47:13 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:58.118 * Looking for test storage... 00:04:58.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:58.118 12:47:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:58.118 12:47:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:58.118 12:47:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:58.118 12:47:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:58.118 12:47:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.118 12:47:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.118 12:47:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.118 ************************************ 00:04:58.118 START TEST default_locks 00:04:58.118 ************************************ 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60543 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60543 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60543 ']' 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
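app_repeat is torn down above with the killprocess helper. A condensed sketch of that pattern follows; the real helper also special-cases processes running under sudo, which is omitted here:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid"                      # fails fast if the pid is already gone
        ps --no-headers -o comm= "$pid"     # reported as reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        kill "$pid"                         # default SIGTERM
        wait "$pid"                         # reap the child; works because the test script started the app itself
    }
    killprocess_sketch 60105                # pid from this particular run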
00:04:58.118 12:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.118 12:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.118 [2024-07-15 12:47:14.039605] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:58.118 [2024-07-15 12:47:14.039685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60543 ] 00:04:58.118 [2024-07-15 12:47:14.176575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.375 [2024-07-15 12:47:14.292515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.375 [2024-07-15 12:47:14.345919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.939 12:47:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.196 12:47:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:59.196 12:47:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60543 00:04:59.196 12:47:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.196 12:47:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60543 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60543 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60543 ']' 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60543 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60543 00:04:59.455 killing process with pid 60543 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60543' 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60543 00:04:59.455 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60543 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60543 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60543 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:00.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
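The lslocks probe above is how default_locks proves the core lock is actually held. The same check in isolation, using the pid from this run:

    locks_exist() {
        # an SPDK app started without --disable-cpumask-locks holds a file lock
        # whose path contains spdk_cpu_lock for each core in its mask
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 60543 && echo "pid 60543 holds its core lock"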
00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60543 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60543 ']' 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.021 ERROR: process (pid: 60543) is no longer running 00:05:00.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60543) - No such process 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.021 ************************************ 00:05:00.021 END TEST default_locks 00:05:00.021 ************************************ 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.021 00:05:00.021 real 0m1.848s 00:05:00.021 user 0m1.992s 00:05:00.021 sys 0m0.523s 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.021 12:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.021 12:47:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:00.021 12:47:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:00.021 12:47:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.021 12:47:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.021 12:47:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.021 ************************************ 00:05:00.021 START TEST default_locks_via_rpc 00:05:00.021 ************************************ 00:05:00.021 12:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:00.021 12:47:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60595 00:05:00.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
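The NOT wrapper driving the check above treats a failing command as the expected outcome. A hypothetical, stripped-down version of that logic (the real helper also inspects signal exits, the es > 128 branch seen in the trace):

    not_sketch() {
        local es=0
        "$@" || es=$?     # run the command, remember its exit status
        (( es != 0 ))     # succeed only if the command failed
    }
    not_sketch false && echo "expected failure observed"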
00:05:00.022 12:47:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60595 00:05:00.022 12:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60595 ']' 00:05:00.022 12:47:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.022 12:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.022 12:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.022 12:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.022 12:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.022 12:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.022 [2024-07-15 12:47:15.936744] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:00.022 [2024-07-15 12:47:15.936830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60595 ] 00:05:00.022 [2024-07-15 12:47:16.068836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.279 [2024-07-15 12:47:16.174539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.279 [2024-07-15 12:47:16.227229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60595 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks 
-p 60595 00:05:00.537 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.795 12:47:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60595 00:05:00.795 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60595 ']' 00:05:00.795 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60595 00:05:00.795 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:00.795 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.795 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60595 00:05:01.053 killing process with pid 60595 00:05:01.053 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.053 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.053 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60595' 00:05:01.053 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60595 00:05:01.053 12:47:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60595 00:05:01.311 00:05:01.312 real 0m1.366s 00:05:01.312 user 0m1.364s 00:05:01.312 sys 0m0.524s 00:05:01.312 12:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.312 ************************************ 00:05:01.312 END TEST default_locks_via_rpc 00:05:01.312 ************************************ 00:05:01.312 12:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.312 12:47:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:01.312 12:47:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:01.312 12:47:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.312 12:47:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.312 12:47:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.312 ************************************ 00:05:01.312 START TEST non_locking_app_on_locked_coremask 00:05:01.312 ************************************ 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60633 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60633 /var/tmp/spdk.sock 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60633 ']' 00:05:01.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
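default_locks_via_rpc, finished above, toggles the core locks at runtime instead of at startup. The same sequence in isolation, assuming the target still serves the default /var/tmp/spdk.sock and using the RPC names that appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks            # release the per-core lock files
    lslocks -p 60595 | grep -c spdk_cpu_lock || true  # expected to print 0 now
    "$rpc" framework_enable_cpumask_locks             # re-acquire them
    lslocks -p 60595 | grep -q spdk_cpu_lock && echo "core lock held again"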
00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.312 12:47:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.312 [2024-07-15 12:47:17.362917] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:01.312 [2024-07-15 12:47:17.363265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60633 ] 00:05:01.569 [2024-07-15 12:47:17.497821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.569 [2024-07-15 12:47:17.612397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.827 [2024-07-15 12:47:17.664739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60649 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60649 /var/tmp/spdk2.sock 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60649 ']' 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.392 12:47:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.392 [2024-07-15 12:47:18.384395] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:02.392 [2024-07-15 12:47:18.384694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60649 ] 00:05:02.649 [2024-07-15 12:47:18.526508] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:02.649 [2024-07-15 12:47:18.526564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.907 [2024-07-15 12:47:18.753786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.907 [2024-07-15 12:47:18.862427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.471 12:47:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.471 12:47:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:03.471 12:47:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60633 00:05:03.471 12:47:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60633 00:05:03.471 12:47:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60633 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60633 ']' 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60633 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60633 00:05:04.404 killing process with pid 60633 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60633' 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60633 00:05:04.404 12:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60633 00:05:04.969 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60649 00:05:04.969 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60649 ']' 00:05:04.969 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60649 00:05:04.969 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:04.969 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.969 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60649 00:05:05.228 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.228 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.228 killing process with pid 60649 00:05:05.228 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60649' 00:05:05.228 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60649 00:05:05.228 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60649 00:05:05.486 ************************************ 00:05:05.486 END TEST non_locking_app_on_locked_coremask 00:05:05.486 ************************************ 00:05:05.486 00:05:05.486 real 0m4.117s 00:05:05.486 user 0m4.644s 00:05:05.486 sys 0m1.057s 00:05:05.486 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.486 12:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.486 12:47:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:05.486 12:47:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:05.487 12:47:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.487 12:47:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.487 12:47:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.487 ************************************ 00:05:05.487 START TEST locking_app_on_unlocked_coremask 00:05:05.487 ************************************ 00:05:05.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60716 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60716 /var/tmp/spdk.sock 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60716 ']' 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
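locking_app_on_unlocked_coremask, starting above, runs two targets on the same core mask. A rough sketch of the shape of that setup; the pids and sockets are just the ones from this run, and the real test waits on each RPC socket via waitforlisten before moving on:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &   # first app: core 0, but no core lock taken
    pid1=$!                                        # 60716 in this run
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # second app: same core, its own RPC socket
    pid2=$!                                        # 60732 in this run
    # the second app can lock core 0 because the first one deliberately skipped it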
00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.487 12:47:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.487 [2024-07-15 12:47:21.526897] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:05.487 [2024-07-15 12:47:21.526988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:05:05.745 [2024-07-15 12:47:21.666043] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:05.745 [2024-07-15 12:47:21.666105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.745 [2024-07-15 12:47:21.791896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.005 [2024-07-15 12:47:21.846045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60732 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60732 /var/tmp/spdk2.sock 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60732 ']' 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.570 12:47:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.570 [2024-07-15 12:47:22.609943] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:06.570 [2024-07-15 12:47:22.610272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60732 ] 00:05:06.828 [2024-07-15 12:47:22.758517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.087 [2024-07-15 12:47:23.018108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.087 [2024-07-15 12:47:23.134904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.649 12:47:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.649 12:47:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.649 12:47:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60732 00:05:07.649 12:47:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60732 00:05:07.649 12:47:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60716 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60716 ']' 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60716 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60716 00:05:08.582 killing process with pid 60716 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60716' 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60716 00:05:08.582 12:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60716 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60732 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60732 ']' 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60732 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60732 00:05:09.193 killing process with pid 60732 00:05:09.193 12:47:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60732' 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60732 00:05:09.193 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60732 00:05:09.758 00:05:09.758 real 0m4.171s 00:05:09.758 user 0m4.651s 00:05:09.758 sys 0m1.146s 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.758 ************************************ 00:05:09.758 END TEST locking_app_on_unlocked_coremask 00:05:09.758 ************************************ 00:05:09.758 12:47:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:09.758 12:47:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:09.758 12:47:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.758 12:47:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.758 12:47:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.758 ************************************ 00:05:09.758 START TEST locking_app_on_locked_coremask 00:05:09.758 ************************************ 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60804 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60804 /var/tmp/spdk.sock 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60804 ']' 00:05:09.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.758 12:47:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.758 [2024-07-15 12:47:25.747565] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:09.758 [2024-07-15 12:47:25.747702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60804 ] 00:05:10.016 [2024-07-15 12:47:25.888334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.016 [2024-07-15 12:47:26.033051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.273 [2024-07-15 12:47:26.086658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60821 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60821 /var/tmp/spdk2.sock 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60821 /var/tmp/spdk2.sock 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:10.838 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.839 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60821 /var/tmp/spdk2.sock 00:05:10.839 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60821 ']' 00:05:10.839 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.839 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.839 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.839 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.839 12:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.839 [2024-07-15 12:47:26.820666] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:10.839 [2024-07-15 12:47:26.820802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60821 ] 00:05:11.096 [2024-07-15 12:47:26.970679] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60804 has claimed it. 00:05:11.096 [2024-07-15 12:47:26.970751] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.662 ERROR: process (pid: 60821) is no longer running 00:05:11.662 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60821) - No such process 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60804 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60804 00:05:11.662 12:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60804 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60804 ']' 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60804 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60804 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.229 killing process with pid 60804 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60804' 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60804 00:05:12.229 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60804 00:05:12.487 00:05:12.487 real 0m2.762s 00:05:12.487 user 0m3.255s 00:05:12.487 sys 0m0.671s 00:05:12.487 ************************************ 00:05:12.487 END TEST locking_app_on_locked_coremask 00:05:12.487 ************************************ 00:05:12.487 12:47:28 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.487 12:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.487 12:47:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:12.487 12:47:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:12.487 12:47:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.487 12:47:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.487 12:47:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.487 ************************************ 00:05:12.487 START TEST locking_overlapped_coremask 00:05:12.487 ************************************ 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60866 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60866 /var/tmp/spdk.sock 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60866 ']' 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.487 12:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.745 [2024-07-15 12:47:28.553386] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:12.745 [2024-07-15 12:47:28.553512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ] 00:05:12.745 [2024-07-15 12:47:28.693003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.003 [2024-07-15 12:47:28.819328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.003 [2024-07-15 12:47:28.819404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.003 [2024-07-15 12:47:28.819407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.003 [2024-07-15 12:47:28.872489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60884 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60884 /var/tmp/spdk2.sock 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60884 /var/tmp/spdk2.sock 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60884 /var/tmp/spdk2.sock 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60884 ']' 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.569 12:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.827 [2024-07-15 12:47:29.639699] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
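The second spdk_tgt above is started with -m 0x1c while the first still holds -m 0x7: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two masks intersect on core 2 and the second target is expected to fail its core-lock claim, which is the error reported just below. The contested core can be read straight off the masks:

    printf '%#x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2: core 2 is claimed by both masks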
00:05:13.827 [2024-07-15 12:47:29.639833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60884 ] 00:05:13.827 [2024-07-15 12:47:29.791050] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60866 has claimed it. 00:05:13.827 [2024-07-15 12:47:29.791131] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:14.392 ERROR: process (pid: 60884) is no longer running 00:05:14.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60884) - No such process 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60866 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60866 ']' 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60866 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60866 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60866' 00:05:14.392 killing process with pid 60866 00:05:14.392 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60866 00:05:14.392 12:47:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60866 00:05:14.958 00:05:14.958 real 0m2.317s 00:05:14.958 user 0m6.390s 00:05:14.958 sys 0m0.426s 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.958 ************************************ 00:05:14.958 END TEST locking_overlapped_coremask 00:05:14.958 ************************************ 00:05:14.958 12:47:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:14.958 12:47:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:14.958 12:47:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.958 12:47:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.958 12:47:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.958 ************************************ 00:05:14.958 START TEST locking_overlapped_coremask_via_rpc 00:05:14.958 ************************************ 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:14.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60930 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60930 /var/tmp/spdk.sock 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60930 ']' 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.958 12:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.958 [2024-07-15 12:47:30.888535] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:14.958 [2024-07-15 12:47:30.888632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60930 ] 00:05:15.217 [2024-07-15 12:47:31.021646] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.217 [2024-07-15 12:47:31.021711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.217 [2024-07-15 12:47:31.138791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.217 [2024-07-15 12:47:31.138876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.217 [2024-07-15 12:47:31.138883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.217 [2024-07-15 12:47:31.192456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60948 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60948 /var/tmp/spdk2.sock 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60948 ']' 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.149 12:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.149 [2024-07-15 12:47:32.114647] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:16.149 [2024-07-15 12:47:32.115048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60948 ] 00:05:16.431 [2024-07-15 12:47:32.263932] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
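Unlike the previous test, both targets here are launched with --disable-cpumask-locks, so the overlapping masks (0x7 for pid 60930, 0x1c for the second target) do not collide at startup; the core locks are only taken later, when framework_enable_cpumask_locks is issued over JSON-RPC, and that is where the overlap on core 2 is expected to surface.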
00:05:16.431 [2024-07-15 12:47:32.263998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.689 [2024-07-15 12:47:32.503505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.689 [2024-07-15 12:47:32.503599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:16.689 [2024-07-15 12:47:32.503601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.689 [2024-07-15 12:47:32.613519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.264 [2024-07-15 12:47:33.154527] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60930 has claimed it. 00:05:17.264 request: 00:05:17.264 { 00:05:17.264 "method": "framework_enable_cpumask_locks", 00:05:17.264 "req_id": 1 00:05:17.264 } 00:05:17.264 Got JSON-RPC error response 00:05:17.264 response: 00:05:17.264 { 00:05:17.264 "code": -32603, 00:05:17.264 "message": "Failed to claim CPU core: 2" 00:05:17.264 } 00:05:17.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
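The failing claim above is driven through the test's rpc_cmd wrapper; the same two calls could be made by hand with SPDK's scripts/rpc.py against the sockets used in this run (a sketch for illustration, not part of the test):

    scripts/rpc.py framework_enable_cpumask_locks                          # default /var/tmp/spdk.sock: pid 60930 claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 is already locked, matching the error response above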
00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60930 /var/tmp/spdk.sock 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60930 ']' 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.264 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60948 /var/tmp/spdk2.sock 00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60948 ']' 00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.526 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.092 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.092 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:18.092 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:18.092 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.092 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.092 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.092 00:05:18.092 real 0m3.132s 00:05:18.092 user 0m1.808s 00:05:18.092 sys 0m0.240s 00:05:18.092 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.092 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.092 ************************************ 00:05:18.092 END TEST locking_overlapped_coremask_via_rpc 00:05:18.092 ************************************ 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:18.092 12:47:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:18.092 12:47:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60930 ]] 00:05:18.092 12:47:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60930 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60930 ']' 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60930 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60930 00:05:18.092 killing process with pid 60930 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60930' 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60930 00:05:18.092 12:47:34 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60930 00:05:18.657 12:47:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60948 ]] 00:05:18.657 12:47:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60948 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60948 ']' 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60948 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:18.657 12:47:34 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60948 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60948' 00:05:18.657 killing process with pid 60948 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60948 00:05:18.657 12:47:34 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60948 00:05:18.915 12:47:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.915 Process with pid 60930 is not found 00:05:18.915 Process with pid 60948 is not found 00:05:18.915 12:47:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:18.915 12:47:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60930 ]] 00:05:18.915 12:47:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60930 00:05:18.915 12:47:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60930 ']' 00:05:18.915 12:47:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60930 00:05:18.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60930) - No such process 00:05:18.915 12:47:34 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60930 is not found' 00:05:18.915 12:47:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60948 ]] 00:05:18.915 12:47:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60948 00:05:18.915 12:47:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60948 ']' 00:05:18.915 12:47:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60948 00:05:18.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60948) - No such process 00:05:18.915 12:47:34 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60948 is not found' 00:05:18.915 12:47:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.915 ************************************ 00:05:18.915 END TEST cpu_locks 00:05:18.915 ************************************ 00:05:18.915 00:05:18.915 real 0m20.955s 00:05:18.915 user 0m38.507s 00:05:18.915 sys 0m5.398s 00:05:18.915 12:47:34 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.915 12:47:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.915 12:47:34 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.915 ************************************ 00:05:18.915 END TEST event 00:05:18.915 ************************************ 00:05:18.915 00:05:18.915 real 0m49.525s 00:05:18.915 user 1m38.059s 00:05:18.915 sys 0m9.078s 00:05:18.915 12:47:34 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.915 12:47:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.915 12:47:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.915 12:47:34 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:18.915 12:47:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.915 12:47:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.915 12:47:34 -- common/autotest_common.sh@10 -- # set +x 00:05:18.915 ************************************ 00:05:18.915 START TEST thread 
00:05:18.915 ************************************ 00:05:18.915 12:47:34 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:19.173 * Looking for test storage... 00:05:19.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:19.173 12:47:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.173 12:47:34 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:19.173 12:47:34 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.173 12:47:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.173 ************************************ 00:05:19.173 START TEST thread_poller_perf 00:05:19.173 ************************************ 00:05:19.173 12:47:35 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.173 [2024-07-15 12:47:35.025909] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:19.173 [2024-07-15 12:47:35.026057] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61076 ] 00:05:19.173 [2024-07-15 12:47:35.163728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.431 [2024-07-15 12:47:35.303205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.431 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:20.368 ====================================== 00:05:20.368 busy:2209798646 (cyc) 00:05:20.368 total_run_count: 314000 00:05:20.368 tsc_hz: 2200000000 (cyc) 00:05:20.368 ====================================== 00:05:20.368 poller_cost: 7037 (cyc), 3198 (nsec) 00:05:20.368 00:05:20.368 real 0m1.393s 00:05:20.368 user 0m1.221s 00:05:20.368 sys 0m0.062s 00:05:20.368 12:47:36 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.368 12:47:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.368 ************************************ 00:05:20.368 END TEST thread_poller_perf 00:05:20.368 ************************************ 00:05:20.627 12:47:36 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:20.627 12:47:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.627 12:47:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:20.627 12:47:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.627 12:47:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.627 ************************************ 00:05:20.627 START TEST thread_poller_perf 00:05:20.627 ************************************ 00:05:20.627 12:47:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.627 [2024-07-15 12:47:36.461486] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
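The poller_cost figure above follows directly from the counters printed with it: roughly 2,209,798,646 busy cycles spread over 314,000 poller invocations is about 7,037 cycles per call, and at the reported 2,200,000,000-cycle TSC rate that is about 3,198 ns. The arithmetic (not part of the test output):

    echo $(( 2209798646 / 314000 ))               # ~7037 cycles per poller invocation
    echo $(( 7037 * 1000000000 / 2200000000 ))    # ~3198 ns at a 2.2 GHz TSC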
00:05:20.627 [2024-07-15 12:47:36.461857] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61110 ] 00:05:20.627 [2024-07-15 12:47:36.599915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.886 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:20.886 [2024-07-15 12:47:36.718567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.821 ====================================== 00:05:21.821 busy:2201939818 (cyc) 00:05:21.821 total_run_count: 4223000 00:05:21.821 tsc_hz: 2200000000 (cyc) 00:05:21.821 ====================================== 00:05:21.821 poller_cost: 521 (cyc), 236 (nsec) 00:05:21.821 ************************************ 00:05:21.821 END TEST thread_poller_perf 00:05:21.821 ************************************ 00:05:21.821 00:05:21.821 real 0m1.368s 00:05:21.821 user 0m1.204s 00:05:21.821 sys 0m0.054s 00:05:21.821 12:47:37 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.821 12:47:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.821 12:47:37 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:21.821 12:47:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:21.821 ************************************ 00:05:21.821 END TEST thread 00:05:21.821 ************************************ 00:05:21.821 00:05:21.821 real 0m2.918s 00:05:21.821 user 0m2.480s 00:05:21.821 sys 0m0.214s 00:05:21.821 12:47:37 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.821 12:47:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.821 12:47:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.821 12:47:37 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:21.821 12:47:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.821 12:47:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.821 12:47:37 -- common/autotest_common.sh@10 -- # set +x 00:05:22.078 ************************************ 00:05:22.078 START TEST accel 00:05:22.078 ************************************ 00:05:22.078 12:47:37 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:22.078 * Looking for test storage... 00:05:22.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:22.078 12:47:37 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:22.078 12:47:37 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:22.078 12:47:37 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
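The second thread_poller_perf run traced above uses -l 0, so the 1000 pollers are registered with a 0-microsecond period and behave as busy pollers that run on every reactor iteration rather than on a 1 µs timer; the per-call overhead drops accordingly: 2,201,939,818 cycles over 4,223,000 invocations is about 521 cycles, or roughly 236 ns at 2.2 GHz, versus ~7,037 cycles for the timed pollers in the first run.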
00:05:22.078 12:47:37 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61186 00:05:22.078 12:47:37 accel -- accel/accel.sh@63 -- # waitforlisten 61186 00:05:22.078 12:47:37 accel -- common/autotest_common.sh@829 -- # '[' -z 61186 ']' 00:05:22.078 12:47:37 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.078 12:47:37 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.078 12:47:37 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:22.078 12:47:37 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:22.078 12:47:37 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.078 12:47:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.078 12:47:37 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.078 12:47:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.078 12:47:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.078 12:47:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.078 12:47:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.078 12:47:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.078 12:47:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:22.078 12:47:37 accel -- accel/accel.sh@41 -- # jq -r . 00:05:22.078 [2024-07-15 12:47:38.028743] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:22.078 [2024-07-15 12:47:38.029212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61186 ] 00:05:22.335 [2024-07-15 12:47:38.169639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.335 [2024-07-15 12:47:38.282905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.335 [2024-07-15 12:47:38.336787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.593 12:47:38 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.593 12:47:38 accel -- common/autotest_common.sh@862 -- # return 0 00:05:22.593 12:47:38 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:22.593 12:47:38 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:22.593 12:47:38 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:22.593 12:47:38 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:22.593 12:47:38 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:22.593 12:47:38 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:22.593 12:47:38 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:22.593 12:47:38 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.593 12:47:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.593 12:47:38 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.593 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.593 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.593 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.593 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.593 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.593 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.593 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.593 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.593 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.593 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.593 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.593 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.593 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.593 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 
12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.594 12:47:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.594 12:47:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.594 12:47:38 accel -- accel/accel.sh@75 -- # killprocess 61186 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@948 -- # '[' -z 61186 ']' 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@952 -- # kill -0 61186 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@953 -- # uname 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61186 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61186' 00:05:22.594 killing process with pid 61186 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@967 -- # kill 61186 00:05:22.594 12:47:38 accel -- common/autotest_common.sh@972 -- # wait 61186 00:05:23.159 12:47:39 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:23.159 12:47:39 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:23.159 12:47:39 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:23.159 12:47:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.159 12:47:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.159 12:47:39 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:23.159 12:47:39 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
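The long run of IFS==/read/expected_opcs lines traced a little earlier is a single loop in accel.sh expanded by xtrace: it asks the target which module is assigned to each opcode and records the answer (every opcode maps to the software module in this run). A compact sketch of that pattern, reconstructed from the trace ($rpc_py stands for the suite's rpc.py wrapper):

    declare -A expected_opcs
    exp_opcs=($($rpc_py accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # split e.g. "copy=software" into opc / module
        expected_opcs["$opc"]=$module
    done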
00:05:23.159 12:47:39 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.159 12:47:39 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:23.159 12:47:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.159 12:47:39 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:23.159 12:47:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:23.159 12:47:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.159 12:47:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.159 ************************************ 00:05:23.159 START TEST accel_missing_filename 00:05:23.159 ************************************ 00:05:23.159 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:23.159 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:23.159 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:23.159 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:23.159 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.159 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:23.159 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.159 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:23.159 12:47:39 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:23.159 [2024-07-15 12:47:39.124882] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:23.159 [2024-07-15 12:47:39.125006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61230 ] 00:05:23.417 [2024-07-15 12:47:39.266852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.417 [2024-07-15 12:47:39.383632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.417 [2024-07-15 12:47:39.439695] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.675 [2024-07-15 12:47:39.516032] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:23.676 A filename is required. 
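accel_perf aborts here with "A filename is required." because the compress workload was requested without -l, which (per the usage text shown further below) names the uncompressed input file; the next test supplies one (test/accel/bib) and instead exercises the complementary failure path, since compress does not support the -y verify option.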
00:05:23.676 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:23.676 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.676 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:23.676 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.676 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:23.676 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.676 00:05:23.676 real 0m0.512s 00:05:23.676 user 0m0.335s 00:05:23.676 sys 0m0.114s 00:05:23.676 ************************************ 00:05:23.676 END TEST accel_missing_filename 00:05:23.676 ************************************ 00:05:23.676 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.676 12:47:39 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:23.676 12:47:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.676 12:47:39 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.676 12:47:39 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:23.676 12:47:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.676 12:47:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.676 ************************************ 00:05:23.676 START TEST accel_compress_verify 00:05:23.676 ************************************ 00:05:23.676 12:47:39 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.676 12:47:39 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:23.676 12:47:39 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.676 12:47:39 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:23.676 12:47:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.676 12:47:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:23.676 12:47:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.676 12:47:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.676 12:47:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.676 12:47:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:23.676 12:47:39 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.676 12:47:39 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.676 12:47:39 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.676 12:47:39 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.676 12:47:39 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.676 12:47:39 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:23.676 12:47:39 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:23.676 [2024-07-15 12:47:39.676091] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:23.676 [2024-07-15 12:47:39.676222] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:05:23.934 [2024-07-15 12:47:39.814586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.934 [2024-07-15 12:47:39.964339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.192 [2024-07-15 12:47:40.024254] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.192 [2024-07-15 12:47:40.102170] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:24.192 00:05:24.192 Compression does not support the verify option, aborting. 00:05:24.192 12:47:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:24.192 12:47:40 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.192 12:47:40 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:24.192 12:47:40 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:24.192 12:47:40 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:24.192 12:47:40 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.192 00:05:24.192 real 0m0.546s 00:05:24.192 user 0m0.366s 00:05:24.192 sys 0m0.129s 00:05:24.192 12:47:40 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.192 ************************************ 00:05:24.192 END TEST accel_compress_verify 00:05:24.192 12:47:40 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:24.192 ************************************ 00:05:24.192 12:47:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.192 12:47:40 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:24.192 12:47:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:24.192 12:47:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.192 12:47:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.192 ************************************ 00:05:24.192 START TEST accel_wrong_workload 00:05:24.192 ************************************ 00:05:24.192 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:24.192 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:24.192 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:24.192 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:24.192 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.192 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:24.192 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.192 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:24.192 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:24.193 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:24.193 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.193 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.193 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.193 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.193 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.193 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:24.193 12:47:40 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:24.451 Unsupported workload type: foobar 00:05:24.451 [2024-07-15 12:47:40.258998] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:24.451 accel_perf options: 00:05:24.451 [-h help message] 00:05:24.451 [-q queue depth per core] 00:05:24.451 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:24.451 [-T number of threads per core 00:05:24.451 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:24.451 [-t time in seconds] 00:05:24.451 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:24.451 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:24.451 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:24.451 [-l for compress/decompress workloads, name of uncompressed input file 00:05:24.451 [-S for crc32c workload, use this seed value (default 0) 00:05:24.451 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:24.451 [-f for fill workload, use this BYTE value (default 255) 00:05:24.451 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:24.451 [-y verify result if this switch is on] 00:05:24.451 [-a tasks to allocate per core (default: same value as -q)] 00:05:24.451 Can be used to spread operations across a wider range of memory. 
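accel_perf validates -w against the workload list in the help text above, so the test only needs to assert a non-zero exit from the foobar run. A hedged sketch of the same contrast outside the harness, reusing the binary path from this log and the crc32c flags that the accel_crc32c test passes later in this run:

BIN=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
# Unsupported workload type -> accel_perf exits non-zero before starting the app:
$BIN -t 1 -w foobar && echo "unexpected success" || echo "rejected: unsupported workload type"
# A workload from the supported list runs instead (same flags as the accel_crc32c test below):
$BIN -t 1 -w crc32c -S 32 -y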
00:05:24.451 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:24.451 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.451 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.451 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.451 00:05:24.451 real 0m0.031s 00:05:24.451 user 0m0.018s 00:05:24.451 sys 0m0.013s 00:05:24.451 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.451 12:47:40 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:24.451 ************************************ 00:05:24.451 END TEST accel_wrong_workload 00:05:24.451 ************************************ 00:05:24.451 12:47:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.451 12:47:40 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:24.451 12:47:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:24.451 12:47:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.451 12:47:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.451 ************************************ 00:05:24.451 START TEST accel_negative_buffers 00:05:24.451 ************************************ 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:24.451 12:47:40 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:24.451 -x option must be non-negative. 
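The same option validation covers -x: per the help text, the xor workload needs at least two source buffers, so -x -1 is refused before any work is submitted. A hedged sketch under the same assumptions as the examples above:

BIN=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
# Negative buffer count is rejected during argument parsing:
$BIN -t 1 -w xor -y -x -1 || echo "rejected: -x option must be non-negative"
# Two source buffers is the documented minimum (and the default):
$BIN -t 1 -w xor -y -x 2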
00:05:24.451 [2024-07-15 12:47:40.334208] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:24.451 accel_perf options: 00:05:24.451 [-h help message] 00:05:24.451 [-q queue depth per core] 00:05:24.451 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:24.451 [-T number of threads per core 00:05:24.451 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:24.451 [-t time in seconds] 00:05:24.451 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:24.451 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:24.451 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:24.451 [-l for compress/decompress workloads, name of uncompressed input file 00:05:24.451 [-S for crc32c workload, use this seed value (default 0) 00:05:24.451 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:24.451 [-f for fill workload, use this BYTE value (default 255) 00:05:24.451 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:24.451 [-y verify result if this switch is on] 00:05:24.451 [-a tasks to allocate per core (default: same value as -q)] 00:05:24.451 Can be used to spread operations across a wider range of memory. 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.451 ************************************ 00:05:24.451 END TEST accel_negative_buffers 00:05:24.451 ************************************ 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.451 00:05:24.451 real 0m0.032s 00:05:24.451 user 0m0.018s 00:05:24.451 sys 0m0.013s 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.451 12:47:40 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:24.451 12:47:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.451 12:47:40 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:24.452 12:47:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:24.452 12:47:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.452 12:47:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.452 ************************************ 00:05:24.452 START TEST accel_crc32c 00:05:24.452 ************************************ 00:05:24.452 12:47:40 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:24.452 12:47:40 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:24.452 [2024-07-15 12:47:40.407964] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:24.452 [2024-07-15 12:47:40.408089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61313 ] 00:05:24.710 [2024-07-15 12:47:40.546143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.710 [2024-07-15 12:47:40.695444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.710 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.967 12:47:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.968 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.968 12:47:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:25.901 12:47:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.901 00:05:25.901 real 0m1.545s 00:05:25.901 user 0m1.320s 00:05:25.901 sys 0m0.129s 00:05:25.901 ************************************ 00:05:25.901 END TEST accel_crc32c 00:05:25.901 ************************************ 00:05:25.901 12:47:41 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.901 12:47:41 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:25.901 12:47:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.901 12:47:41 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:25.901 12:47:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:25.901 12:47:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.901 12:47:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.158 ************************************ 00:05:26.158 START TEST accel_crc32c_C2 00:05:26.158 ************************************ 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:26.158 12:47:41 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:26.158 12:47:41 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:26.158 [2024-07-15 12:47:41.993221] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:26.158 [2024-07-15 12:47:41.993354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61353 ] 00:05:26.158 [2024-07-15 12:47:42.131262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.416 [2024-07-15 12:47:42.280870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.416 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.417 12:47:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.790 12:47:43 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.790 ************************************ 00:05:27.790 END TEST accel_crc32c_C2 00:05:27.790 ************************************ 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.790 00:05:27.790 real 0m1.545s 00:05:27.790 user 0m1.324s 00:05:27.790 sys 0m0.121s 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.790 12:47:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:27.790 12:47:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.790 12:47:43 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:27.790 12:47:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:27.790 12:47:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.790 12:47:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.790 ************************************ 00:05:27.790 START TEST accel_copy 00:05:27.790 ************************************ 00:05:27.790 12:47:43 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:27.790 12:47:43 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:27.790 12:47:43 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:27.790 [2024-07-15 12:47:43.581115] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:27.790 [2024-07-15 12:47:43.581255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61382 ] 00:05:27.790 [2024-07-15 12:47:43.722536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.790 [2024-07-15 12:47:43.844050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.047 12:47:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.433 ************************************ 00:05:29.433 END TEST accel_copy 00:05:29.433 ************************************ 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:29.433 12:47:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.433 00:05:29.433 real 0m1.525s 00:05:29.433 user 0m1.310s 00:05:29.433 sys 0m0.117s 00:05:29.434 12:47:45 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.434 12:47:45 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:29.434 12:47:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.434 12:47:45 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.434 12:47:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:29.434 12:47:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.434 12:47:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.434 ************************************ 00:05:29.434 START TEST accel_fill 00:05:29.434 ************************************ 00:05:29.434 12:47:45 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.434 12:47:45 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:29.434 [2024-07-15 12:47:45.149251] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:29.434 [2024-07-15 12:47:45.149419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61422 ] 00:05:29.434 [2024-07-15 12:47:45.295662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.434 [2024-07-15 12:47:45.432468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.434 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.692 12:47:45 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.692 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 12:47:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.627 ************************************ 00:05:30.627 END TEST accel_fill 00:05:30.627 ************************************ 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:30.627 12:47:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.627 00:05:30.627 real 0m1.541s 00:05:30.627 user 0m1.320s 00:05:30.627 sys 0m0.123s 00:05:30.627 12:47:46 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.627 12:47:46 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:30.886 12:47:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.886 12:47:46 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:30.886 12:47:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:30.886 12:47:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.886 12:47:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.886 ************************************ 00:05:30.886 START TEST accel_copy_crc32c 00:05:30.886 ************************************ 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:30.886 12:47:46 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:30.886 [2024-07-15 12:47:46.731112] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:30.886 [2024-07-15 12:47:46.731257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61451 ] 00:05:30.886 [2024-07-15 12:47:46.869512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.145 [2024-07-15 12:47:47.024501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.145 12:47:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.567 ************************************ 00:05:32.567 END TEST accel_copy_crc32c 00:05:32.567 ************************************ 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.567 00:05:32.567 real 0m1.549s 00:05:32.567 user 0m1.331s 00:05:32.567 sys 0m0.121s 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.567 12:47:48 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:32.567 12:47:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.567 12:47:48 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:32.567 12:47:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:32.567 12:47:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.567 12:47:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.567 ************************************ 00:05:32.567 START TEST accel_copy_crc32c_C2 00:05:32.567 ************************************ 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:32.567 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:32.567 [2024-07-15 12:47:48.319251] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:32.567 [2024-07-15 12:47:48.319408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61491 ] 00:05:32.567 [2024-07-15 12:47:48.457244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.567 [2024-07-15 12:47:48.577800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.825 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.825 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.825 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.825 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.825 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.825 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.826 12:47:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.760 ************************************ 00:05:33.760 END TEST accel_copy_crc32c_C2 00:05:33.760 ************************************ 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.760 00:05:33.760 real 0m1.517s 00:05:33.760 
user 0m1.310s 00:05:33.760 sys 0m0.115s 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.760 12:47:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:34.018 12:47:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.018 12:47:49 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:34.018 12:47:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.018 12:47:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.018 12:47:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.018 ************************************ 00:05:34.018 START TEST accel_dualcast 00:05:34.018 ************************************ 00:05:34.018 12:47:49 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:34.018 12:47:49 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:34.018 12:47:49 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:34.018 12:47:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.018 12:47:49 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:34.018 12:47:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:34.019 12:47:49 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:34.019 [2024-07-15 12:47:49.876106] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
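The two copy_crc32c passes that finished above (TEST accel_copy_crc32c and TEST accel_copy_crc32c_C2) drive the accel_perf example with the flags shown verbatim in their traces. As a rough sketch, either run can be repeated outside the accel.sh harness, assuming the same build tree at /home/vagrant/spdk_repo/spdk as in this job and dropping the harness's /dev/fd/62 JSON config (which leaves the default software modules, matching accel_module=software above):

cd /home/vagrant/spdk_repo/spdk
# 1-second copy+crc32c run with result verification, as in TEST accel_copy_crc32c
./build/examples/accel_perf -t 1 -w copy_crc32c -y
# same workload with the extra -C 2 used by TEST accel_copy_crc32c_C2
./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2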
00:05:34.019 [2024-07-15 12:47:49.876212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61520 ] 00:05:34.019 [2024-07-15 12:47:50.007261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.277 [2024-07-15 12:47:50.159110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.277 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.278 12:47:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.278 12:47:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.278 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.278 12:47:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.651 ************************************ 00:05:35.651 END TEST accel_dualcast 00:05:35.651 ************************************ 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:35.651 12:47:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.651 00:05:35.651 real 0m1.542s 00:05:35.651 user 0m1.323s 00:05:35.651 sys 0m0.121s 00:05:35.651 12:47:51 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.651 12:47:51 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:35.651 12:47:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.651 12:47:51 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:35.651 12:47:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:35.651 12:47:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.651 12:47:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.651 ************************************ 00:05:35.651 START TEST accel_compare 00:05:35.651 ************************************ 00:05:35.651 12:47:51 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:35.651 12:47:51 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:35.651 [2024-07-15 12:47:51.458887] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
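Each TEST block in this stretch closes with an END TEST banner and a real/user/sys triple (for example, real 0m1.542s for accel_dualcast just above). A purely illustrative way to pull those two pieces out of a saved console log; the file name console.log is hypothetical:

grep -oE 'END TEST [A-Za-z0-9_]+|real +[0-9]+m[0-9.]+s' console.log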
00:05:35.651 [2024-07-15 12:47:51.459029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61560 ] 00:05:35.651 [2024-07-15 12:47:51.597764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.909 [2024-07-15 12:47:51.715783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.909 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.910 12:47:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:37.296 12:47:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.296 ************************************ 00:05:37.296 END TEST accel_compare 00:05:37.296 ************************************ 00:05:37.296 00:05:37.296 real 0m1.511s 00:05:37.296 user 0m1.286s 00:05:37.296 sys 0m0.127s 00:05:37.296 12:47:52 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.296 12:47:52 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:37.296 12:47:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.296 12:47:52 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:37.296 12:47:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:37.296 12:47:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.296 12:47:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.296 ************************************ 00:05:37.296 START TEST accel_xor 00:05:37.296 ************************************ 00:05:37.296 12:47:52 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:37.296 12:47:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:37.296 [2024-07-15 12:47:53.008840] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
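Most of the trace volume here is accel.sh stepping through expected settings with IFS=: and read -r var val, then dispatching on each key in a case "$var" statement (which is where accel_opc and accel_module get set). Reduced to a standalone bash sketch, with illustrative key names rather than the script's own:

# parse colon-separated key:value pairs the way the trace above does
while IFS=: read -r var val; do
  case "$var" in
    opc)    opc=$val ;;      # e.g. xor, compare, dif_verify
    module) module=$val ;;   # e.g. software
    *)      ;;               # ignore anything else
  esac
done <<'EOF'
opc:xor
module:software
EOF
echo "workload=$opc module=$module"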
00:05:37.296 [2024-07-15 12:47:53.008996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61589 ] 00:05:37.296 [2024-07-15 12:47:53.148641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.296 [2024-07-15 12:47:53.307404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 12:47:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.925 12:47:54 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.925 ************************************ 00:05:38.925 END TEST accel_xor 00:05:38.925 ************************************ 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.925 00:05:38.925 real 0m1.570s 00:05:38.925 user 0m1.337s 00:05:38.925 sys 0m0.135s 00:05:38.925 12:47:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.925 12:47:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:38.925 12:47:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.925 12:47:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:38.925 12:47:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:38.925 12:47:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.925 12:47:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.925 ************************************ 00:05:38.925 START TEST accel_xor 00:05:38.925 ************************************ 00:05:38.925 12:47:54 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.925 12:47:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:38.926 [2024-07-15 12:47:54.622049] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
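The xor pass launching here differs from the one that just ended (real 0m1.570s) only by the extra -x 3 on the accel_perf command line: the first pass ran with the val=2 visible in its trace, this one raises the xor source-buffer count to 3. A side-by-side re-run under the same build-tree assumption as the earlier sketch:

./build/examples/accel_perf -t 1 -w xor -y        # matches the first xor pass
./build/examples/accel_perf -t 1 -w xor -y -x 3   # matches this second pass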
00:05:38.926 [2024-07-15 12:47:54.622191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61629 ] 00:05:38.926 [2024-07-15 12:47:54.761810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.926 [2024-07-15 12:47:54.918647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.926 12:47:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.184 12:47:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.184 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.184 12:47:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.121 12:47:56 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:40.121 12:47:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.121 00:05:40.121 real 0m1.561s 00:05:40.121 user 0m1.342s 00:05:40.121 sys 0m0.120s 00:05:40.121 12:47:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.121 ************************************ 00:05:40.121 END TEST accel_xor 00:05:40.121 ************************************ 00:05:40.121 12:47:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:40.380 12:47:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.380 12:47:56 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:40.380 12:47:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:40.380 12:47:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.380 12:47:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.380 ************************************ 00:05:40.380 START TEST accel_dif_verify 00:05:40.380 ************************************ 00:05:40.380 12:47:56 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:40.380 12:47:56 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:40.380 [2024-07-15 12:47:56.223436] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
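The dif_verify case that starts here is driven entirely by the command line the trace records: accel_test wraps accel_perf with -t 1 -w dif_verify and supplies an accel JSON config on /dev/fd/62. A minimal sketch of reproducing that single run by hand, assuming the SPDK checkout path shown in this log and leaving out the -c config that the harness feeds in:

#!/usr/bin/env bash
# Sketch only, not the test harness: run the same dif_verify workload for
# one second against the default software accel module. The path comes
# from this log; the harness additionally passes -c /dev/fd/62 with an
# accel JSON config, which is omitted here.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify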
00:05:40.380 [2024-07-15 12:47:56.223593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61664 ] 00:05:40.380 [2024-07-15 12:47:56.372329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.639 [2024-07-15 12:47:56.494012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.639 12:47:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.017 12:47:57 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:42.017 12:47:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.017 00:05:42.017 real 0m1.524s 00:05:42.017 user 0m0.015s 00:05:42.017 sys 0m0.003s 00:05:42.017 12:47:57 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.017 12:47:57 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:42.017 ************************************ 00:05:42.017 END TEST accel_dif_verify 00:05:42.017 ************************************ 00:05:42.017 12:47:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.017 12:47:57 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:42.017 12:47:57 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:42.017 12:47:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.017 12:47:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.017 ************************************ 00:05:42.017 START TEST accel_dif_generate 00:05:42.017 ************************************ 00:05:42.017 12:47:57 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.017 12:47:57 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:42.017 12:47:57 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:42.017 [2024-07-15 12:47:57.787979] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:42.018 [2024-07-15 12:47:57.788105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61698 ] 00:05:42.018 [2024-07-15 12:47:57.919707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.018 [2024-07-15 12:47:58.039482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.278 12:47:58 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.278 12:47:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:43.215 12:47:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.215 00:05:43.215 real 0m1.502s 
00:05:43.215 user 0m1.290s 00:05:43.215 sys 0m0.115s 00:05:43.215 12:47:59 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.215 12:47:59 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:43.215 ************************************ 00:05:43.215 END TEST accel_dif_generate 00:05:43.215 ************************************ 00:05:43.474 12:47:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.474 12:47:59 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:43.474 12:47:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:43.474 12:47:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.474 12:47:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.474 ************************************ 00:05:43.474 START TEST accel_dif_generate_copy 00:05:43.474 ************************************ 00:05:43.474 12:47:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:43.474 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:43.474 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:43.474 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.474 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.474 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:43.474 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:43.475 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:43.475 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.475 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.475 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.475 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.475 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.475 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:43.475 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:43.475 [2024-07-15 12:47:59.328662] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
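Each of these cases follows the same pattern: run_test invokes accel_test, which launches accel_perf with -t 1 -w <opcode> and then checks the reported module and opcode. A sketch of driving the DIF opcodes exercised in this part of the log directly, assuming the same repo path and the default software module:

#!/usr/bin/env bash
# Sketch, not the harness: run each DIF opcode from this section of the
# log for one second. Paths and opcode names are taken from the trace;
# the harness's -c /dev/fd/62 accel config is omitted.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
for opcode in dif_verify dif_generate dif_generate_copy; do
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$opcode"
done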
00:05:43.475 [2024-07-15 12:47:59.328750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61733 ] 00:05:43.475 [2024-07-15 12:47:59.462342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.735 [2024-07-15 12:47:59.589902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.735 12:47:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
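Once accel_perf exits, each case asserts on the values collected by the read -r var val loop traced above: the module and opcode variables must be non-empty and the module must be the software implementation, which is what the [[ -n software ]] and [[ software == software ]] checks in the lines that follow record. A sketch of that final assertion, with the values hard-coded for illustration (in the trace they come from the var/val stream, not typed in by hand):

# Sketch of the pass condition recorded after each run.
accel_module=software          # illustrative; filled in by the trace's read loop
accel_opc=dif_generate_copy    # illustrative; filled in by the trace's read loop
[[ -n "$accel_module" && -n "$accel_opc" && "$accel_module" == software ]] \
    && echo "PASS: $accel_opc ran on the $accel_module module"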
00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.112 00:05:45.112 real 0m1.505s 00:05:45.112 user 0m1.296s 00:05:45.112 sys 0m0.112s 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.112 12:48:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:45.112 ************************************ 00:05:45.112 END TEST accel_dif_generate_copy 00:05:45.112 ************************************ 00:05:45.112 12:48:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.112 12:48:00 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:45.112 12:48:00 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.112 12:48:00 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:45.112 12:48:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.112 12:48:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.112 ************************************ 00:05:45.112 START TEST accel_comp 00:05:45.112 ************************************ 00:05:45.112 12:48:00 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:45.112 12:48:00 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:45.112 12:48:00 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:45.112 [2024-07-15 12:48:00.880524] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:45.112 [2024-07-15 12:48:00.880661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61767 ] 00:05:45.112 [2024-07-15 12:48:01.023864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.112 [2024-07-15 12:48:01.134529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.371 12:48:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:46.317 12:48:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.317 00:05:46.317 real 0m1.500s 00:05:46.317 user 0m1.295s 00:05:46.317 sys 0m0.117s 00:05:46.317 12:48:02 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.317 ************************************ 00:05:46.317 END TEST accel_comp 00:05:46.317 12:48:02 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:46.317 ************************************ 00:05:46.574 12:48:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.574 12:48:02 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:46.574 12:48:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:46.574 12:48:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.574 12:48:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.574 ************************************ 00:05:46.574 START TEST accel_decomp 00:05:46.574 ************************************ 00:05:46.574 12:48:02 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:46.574 12:48:02 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:46.574 [2024-07-15 12:48:02.425866] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:46.574 [2024-07-15 12:48:02.425955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61802 ] 00:05:46.574 [2024-07-15 12:48:02.563828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.831 [2024-07-15 12:48:02.679116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.831 12:48:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:48.200 12:48:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.200 00:05:48.200 real 0m1.502s 00:05:48.200 user 0m1.296s 00:05:48.200 sys 0m0.115s 00:05:48.200 12:48:03 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.200 12:48:03 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:48.200 ************************************ 00:05:48.200 END TEST accel_decomp 00:05:48.200 ************************************ 00:05:48.200 12:48:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.200 12:48:03 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:48.200 12:48:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:48.200 12:48:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.200 12:48:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.200 ************************************ 00:05:48.200 START TEST accel_decomp_full 00:05:48.200 ************************************ 00:05:48.200 12:48:03 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.200 12:48:03 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.201 12:48:03 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.201 12:48:03 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:48.201 12:48:03 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:48.201 [2024-07-15 12:48:03.962834] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:48.201 [2024-07-15 12:48:03.962969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61836 ] 00:05:48.201 [2024-07-15 12:48:04.094989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.201 [2024-07-15 12:48:04.221486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.458 12:48:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.829 12:48:05 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.829 12:48:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:49.830 12:48:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.830 00:05:49.830 real 0m1.517s 00:05:49.830 user 0m1.304s 00:05:49.830 sys 0m0.119s 00:05:49.830 12:48:05 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.830 12:48:05 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:49.830 ************************************ 00:05:49.830 END TEST accel_decomp_full 00:05:49.830 ************************************ 00:05:49.830 12:48:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.830 12:48:05 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:49.830 12:48:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:49.830 12:48:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.830 12:48:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.830 ************************************ 00:05:49.830 START TEST accel_decomp_mcore 00:05:49.830 ************************************ 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:49.830 [2024-07-15 12:48:05.530794] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:49.830 [2024-07-15 12:48:05.530939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61873 ] 00:05:49.830 [2024-07-15 12:48:05.672085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.830 [2024-07-15 12:48:05.801701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.830 [2024-07-15 12:48:05.801824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.830 [2024-07-15 12:48:05.801882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.830 [2024-07-15 12:48:05.802217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.830 12:48:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.204 00:05:51.204 real 0m1.562s 00:05:51.204 user 0m0.011s 00:05:51.204 sys 0m0.003s 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.204 ************************************ 00:05:51.204 END TEST accel_decomp_mcore 00:05:51.204 ************************************ 00:05:51.204 12:48:07 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:51.204 12:48:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.204 12:48:07 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:51.204 12:48:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:51.204 12:48:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.204 12:48:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.204 ************************************ 00:05:51.204 START TEST accel_decomp_full_mcore 00:05:51.204 ************************************ 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.204 12:48:07 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:51.204 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:51.204 [2024-07-15 12:48:07.122914] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:51.204 [2024-07-15 12:48:07.123008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61910 ] 00:05:51.204 [2024-07-15 12:48:07.260883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.462 [2024-07-15 12:48:07.416083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.462 [2024-07-15 12:48:07.416178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.462 [2024-07-15 12:48:07.416262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.462 [2024-07-15 12:48:07.416270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:51.462 12:48:07 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.462 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.463 12:48:07 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.463 12:48:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.837 00:05:52.837 real 0m1.582s 00:05:52.837 user 0m4.813s 00:05:52.837 sys 0m0.142s 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.837 12:48:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:52.837 ************************************ 00:05:52.837 END TEST accel_decomp_full_mcore 00:05:52.837 ************************************ 00:05:52.837 12:48:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.837 12:48:08 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:52.837 12:48:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:52.837 12:48:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.837 12:48:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.837 ************************************ 00:05:52.837 START TEST accel_decomp_mthread 00:05:52.837 ************************************ 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:52.837 12:48:08 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:52.837 [2024-07-15 12:48:08.749143] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:52.837 [2024-07-15 12:48:08.749724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61948 ] 00:05:52.837 [2024-07-15 12:48:08.884437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.095 [2024-07-15 12:48:09.005607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.095 12:48:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.474 ************************************ 00:05:54.474 END TEST accel_decomp_mthread 00:05:54.474 ************************************ 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.474 00:05:54.474 real 0m1.514s 00:05:54.474 user 0m1.310s 00:05:54.474 sys 0m0.109s 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.474 12:48:10 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:54.474 12:48:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.474 12:48:10 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:54.474 12:48:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:54.474 12:48:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.474 12:48:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.474 ************************************ 00:05:54.474 START 
TEST accel_decomp_full_mthread 00:05:54.474 ************************************ 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:54.474 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:54.475 [2024-07-15 12:48:10.301250] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:54.475 [2024-07-15 12:48:10.301355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61982 ] 00:05:54.475 [2024-07-15 12:48:10.432302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.738 [2024-07-15 12:48:10.562442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:05:54.738 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.739 12:48:10 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.739 12:48:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.151 00:05:56.151 real 0m1.534s 00:05:56.151 user 0m1.325s 00:05:56.151 sys 0m0.118s 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.151 12:48:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:56.151 ************************************ 00:05:56.151 END TEST accel_decomp_full_mthread 00:05:56.151 ************************************ 
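For reference, the decompress workload exercised by accel_decomp_full_mthread above can be re-run by hand from the same checkout. This is a minimal sketch only, assuming the repository path shown in this log and assuming that dropping the harness-generated "-c /dev/fd/62" accel JSON config (built by build_accel_config) simply leaves accel_perf on its default software path; all flags are copied verbatim from the logged invocation.

#!/usr/bin/env bash
# Sketch: manual re-run of the accel_perf command captured in this log,
# without the fd-based accel JSON config the test harness normally feeds in.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # checkout path as it appears in this log

# 1-second decompress run against the test/accel/bib sample; -T 2 gives the
# multi-threaded ("mthread") variant of the test case above.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
  -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2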
00:05:56.151 12:48:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.151 12:48:11 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:56.151 12:48:11 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:56.151 12:48:11 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:56.151 12:48:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.151 12:48:11 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:56.151 12:48:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.151 12:48:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.151 12:48:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.151 12:48:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.151 12:48:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.151 12:48:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.151 12:48:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:56.151 12:48:11 accel -- accel/accel.sh@41 -- # jq -r . 00:05:56.151 ************************************ 00:05:56.151 START TEST accel_dif_functional_tests 00:05:56.151 ************************************ 00:05:56.151 12:48:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:56.151 [2024-07-15 12:48:11.905799] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:56.151 [2024-07-15 12:48:11.905895] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62018 ] 00:05:56.151 [2024-07-15 12:48:12.040873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.151 [2024-07-15 12:48:12.159906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.151 [2024-07-15 12:48:12.160038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.151 [2024-07-15 12:48:12.160043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.410 [2024-07-15 12:48:12.213053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.410 00:05:56.410 00:05:56.410 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.410 http://cunit.sourceforge.net/ 00:05:56.410 00:05:56.410 00:05:56.410 Suite: accel_dif 00:05:56.410 Test: verify: DIF generated, GUARD check ...passed 00:05:56.410 Test: verify: DIF generated, APPTAG check ...passed 00:05:56.410 Test: verify: DIF generated, REFTAG check ...passed 00:05:56.410 Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:48:12.250813] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 passed 00:05:56.410 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:48:12.251159] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a passed 00:05:56.410 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 12:48:12.251443] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a passed 00:05:56.410 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:56.410 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 12:48:12.251807] dif.c: 841:_dif_verify:
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:56.410 passed 00:05:56.410 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:56.410 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:56.410 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:56.410 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 12:48:12.252573] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:56.410 passed 00:05:56.410 Test: verify copy: DIF generated, GUARD check ...passed 00:05:56.410 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:56.410 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:56.410 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 12:48:12.253639] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:56.410 passed 00:05:56.410 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 12:48:12.253913] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:56.410 passed 00:05:56.410 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 12:48:12.254150] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:56.410 passed 00:05:56.410 Test: generate copy: DIF generated, GUARD check ...passed 00:05:56.410 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:56.410 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:56.410 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:56.410 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:56.410 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:56.410 Test: generate copy: iovecs-len validate ...[2024-07-15 12:48:12.254744] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:56.410 passed 00:05:56.410 Test: generate copy: buffer alignment validate ...passed 00:05:56.410 00:05:56.410 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.410 suites 1 1 n/a 0 0 00:05:56.410 tests 26 26 26 0 0 00:05:56.410 asserts 115 115 115 0 n/a 00:05:56.410 00:05:56.410 Elapsed time = 0.010 seconds 00:05:56.670 00:05:56.670 real 0m0.610s 00:05:56.670 user 0m0.828s 00:05:56.670 sys 0m0.150s 00:05:56.670 12:48:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.670 ************************************ 00:05:56.670 12:48:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:56.670 END TEST accel_dif_functional_tests 00:05:56.670 ************************************ 00:05:56.670 12:48:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.670 ************************************ 00:05:56.670 END TEST accel 00:05:56.670 ************************************ 00:05:56.670 00:05:56.670 real 0m34.623s 00:05:56.670 user 0m36.362s 00:05:56.670 sys 0m3.902s 00:05:56.670 12:48:12 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.670 12:48:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.670 12:48:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.670 12:48:12 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:56.670 12:48:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.670 12:48:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.670 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:05:56.670 ************************************ 00:05:56.670 START TEST accel_rpc 00:05:56.670 ************************************ 00:05:56.670 12:48:12 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:56.670 * Looking for test storage... 00:05:56.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:56.670 12:48:12 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:56.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.670 12:48:12 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62088 00:05:56.670 12:48:12 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62088 00:05:56.670 12:48:12 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:56.670 12:48:12 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62088 ']' 00:05:56.670 12:48:12 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.670 12:48:12 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.670 12:48:12 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.670 12:48:12 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.670 12:48:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.670 [2024-07-15 12:48:12.677027] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:56.670 [2024-07-15 12:48:12.677116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62088 ] 00:05:56.933 [2024-07-15 12:48:12.817356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.933 [2024-07-15 12:48:12.949390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.866 12:48:13 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.866 12:48:13 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:57.866 12:48:13 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:57.866 12:48:13 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:57.866 12:48:13 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:57.866 12:48:13 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:57.866 12:48:13 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:57.866 12:48:13 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.866 12:48:13 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.866 12:48:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.866 ************************************ 00:05:57.866 START TEST accel_assign_opcode 00:05:57.866 ************************************ 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:57.866 [2024-07-15 12:48:13.774367] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:57.866 [2024-07-15 12:48:13.782366] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.866 12:48:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:57.866 [2024-07-15 12:48:13.847097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.124 12:48:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.124 12:48:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:58.124 12:48:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:58.124 12:48:14 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.124 12:48:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:58.124 12:48:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:58.124 12:48:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.124 software 00:05:58.124 00:05:58.124 real 0m0.300s 00:05:58.124 user 0m0.048s 00:05:58.124 sys 0m0.005s 00:05:58.124 12:48:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.124 ************************************ 00:05:58.124 END TEST accel_assign_opcode 00:05:58.124 ************************************ 00:05:58.124 12:48:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:58.124 12:48:14 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62088 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62088 ']' 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62088 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62088 00:05:58.124 killing process with pid 62088 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62088' 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@967 -- # kill 62088 00:05:58.124 12:48:14 accel_rpc -- common/autotest_common.sh@972 -- # wait 62088 00:05:58.699 00:05:58.699 real 0m1.982s 00:05:58.699 user 0m2.170s 00:05:58.699 sys 0m0.420s 00:05:58.699 12:48:14 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.699 ************************************ 00:05:58.699 END TEST accel_rpc 00:05:58.699 ************************************ 00:05:58.699 12:48:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.699 12:48:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.699 12:48:14 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:58.699 12:48:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.699 12:48:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.699 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:05:58.699 ************************************ 00:05:58.699 START TEST app_cmdline 00:05:58.699 ************************************ 00:05:58.699 12:48:14 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:58.699 * Looking for test storage... 
00:05:58.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:58.699 12:48:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:58.699 12:48:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:58.699 12:48:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62183 00:05:58.699 12:48:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62183 00:05:58.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.699 12:48:14 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62183 ']' 00:05:58.699 12:48:14 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.699 12:48:14 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.699 12:48:14 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.699 12:48:14 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.699 12:48:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.699 [2024-07-15 12:48:14.724297] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:58.699 [2024-07-15 12:48:14.725337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62183 ] 00:05:58.957 [2024-07-15 12:48:14.865075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.957 [2024-07-15 12:48:14.985188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.216 [2024-07-15 12:48:15.039651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:59.782 12:48:15 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.782 12:48:15 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:59.782 12:48:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:00.040 { 00:06:00.040 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:06:00.040 "fields": { 00:06:00.040 "major": 24, 00:06:00.040 "minor": 9, 00:06:00.040 "patch": 0, 00:06:00.040 "suffix": "-pre", 00:06:00.040 "commit": "2728651ee" 00:06:00.040 } 00:06:00.040 } 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:00.298 12:48:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:00.298 12:48:16 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.556 request: 00:06:00.556 { 00:06:00.556 "method": "env_dpdk_get_mem_stats", 00:06:00.556 "req_id": 1 00:06:00.556 } 00:06:00.556 Got JSON-RPC error response 00:06:00.556 response: 00:06:00.556 { 00:06:00.556 "code": -32601, 00:06:00.556 "message": "Method not found" 00:06:00.556 } 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.556 12:48:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62183 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62183 ']' 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62183 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62183 00:06:00.556 killing process with pid 62183 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62183' 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@967 -- # kill 62183 00:06:00.556 12:48:16 app_cmdline -- common/autotest_common.sh@972 -- # wait 62183 00:06:01.142 00:06:01.142 real 0m2.360s 00:06:01.142 user 0m3.035s 00:06:01.142 sys 0m0.467s 00:06:01.142 12:48:16 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.142 ************************************ 00:06:01.142 END TEST app_cmdline 00:06:01.142 ************************************ 00:06:01.142 12:48:16 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.142 12:48:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.142 12:48:16 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:01.142 12:48:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.142 12:48:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.142 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 ************************************ 00:06:01.142 START TEST version 00:06:01.142 ************************************ 00:06:01.142 12:48:16 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:01.142 * Looking for test storage... 00:06:01.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:01.142 12:48:17 version -- app/version.sh@17 -- # get_header_version major 00:06:01.142 12:48:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.142 12:48:17 version -- app/version.sh@14 -- # cut -f2 00:06:01.142 12:48:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.142 12:48:17 version -- app/version.sh@17 -- # major=24 00:06:01.142 12:48:17 version -- app/version.sh@18 -- # get_header_version minor 00:06:01.142 12:48:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.142 12:48:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.142 12:48:17 version -- app/version.sh@14 -- # cut -f2 00:06:01.142 12:48:17 version -- app/version.sh@18 -- # minor=9 00:06:01.142 12:48:17 version -- app/version.sh@19 -- # get_header_version patch 00:06:01.142 12:48:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.142 12:48:17 version -- app/version.sh@14 -- # cut -f2 00:06:01.142 12:48:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.142 12:48:17 version -- app/version.sh@19 -- # patch=0 00:06:01.142 12:48:17 version -- app/version.sh@20 -- # get_header_version suffix 00:06:01.142 12:48:17 version -- app/version.sh@14 -- # cut -f2 00:06:01.142 12:48:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.142 12:48:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.142 12:48:17 version -- app/version.sh@20 -- # suffix=-pre 00:06:01.142 12:48:17 version -- app/version.sh@22 -- # version=24.9 00:06:01.142 12:48:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:01.142 12:48:17 version -- app/version.sh@28 -- # version=24.9rc0 00:06:01.142 12:48:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:01.142 12:48:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:01.142 12:48:17 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:01.142 12:48:17 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:01.142 00:06:01.142 real 0m0.140s 00:06:01.142 user 0m0.085s 00:06:01.142 sys 0m0.084s 00:06:01.142 ************************************ 00:06:01.142 END TEST version 00:06:01.142 ************************************ 00:06:01.142 12:48:17 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.142 12:48:17 version -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 12:48:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.142 12:48:17 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:01.142 12:48:17 -- spdk/autotest.sh@198 -- # uname -s 00:06:01.142 12:48:17 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:01.142 12:48:17 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:01.142 12:48:17 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:01.142 12:48:17 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:01.142 12:48:17 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:01.142 12:48:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.142 12:48:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.142 12:48:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 ************************************ 00:06:01.142 START TEST spdk_dd 00:06:01.142 ************************************ 00:06:01.142 12:48:17 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:01.400 * Looking for test storage... 00:06:01.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:01.400 12:48:17 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.400 12:48:17 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.400 12:48:17 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.400 12:48:17 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.400 12:48:17 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.400 12:48:17 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.400 12:48:17 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.400 12:48:17 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:01.400 12:48:17 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.400 12:48:17 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.658 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:01.658 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:01.658 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:01.658 12:48:17 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:01.658 12:48:17 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:01.658 12:48:17 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:01.658 12:48:17 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:01.658 12:48:17 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.658 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:01.659 12:48:17 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:01.659 
12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.659 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:01.660 * spdk_dd linked to liburing 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:01.660 12:48:17 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:01.660 12:48:17 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:01.660 12:48:17 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:01.660 12:48:17 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:01.660 12:48:17 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.660 12:48:17 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.660 12:48:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:01.660 ************************************ 00:06:01.660 START TEST spdk_dd_basic_rw 00:06:01.660 ************************************ 00:06:01.660 12:48:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:01.919 * Looking for test storage... 
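The check traced above (dd/common.sh@142-157) decides whether this spdk_dd build can exercise the io_uring path: it walks the shared-library list of the spdk_dd binary looking for a liburing.so.* entry, then only counts it as usable if /usr/lib64/liburing.so.2 is actually present on the host, and exports the result as liburing_in_use=1. A loose sketch of that logic; the ldd invocation feeding the read loop is an assumption, since the command producing the library list is not visible in this excerpt:

  liburing_in_use=0
  while read -r lib _ so _; do                 # ldd lines look like: <name> => <path> (<addr>)
    if [[ $lib == liburing.so.* ]]; then
      printf '* spdk_dd linked to liburing\n'
      liburing_in_use=1
    fi
  done < <(ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
  # Being linked is not enough: the runtime library must also be present on the host.
  [[ -e /usr/lib64/liburing.so.2 ]] || liburing_in_use=0
  export liburing_in_use
  # dd.sh@15 then only bails out when uring was requested but cannot be used:
  # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))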
00:06:01.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:01.919 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:01.920 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:01.920 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.921 12:48:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.178 ************************************ 00:06:02.178 START TEST dd_bs_lt_native_bs 00:06:02.178 ************************************ 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
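The get_native_nvme_bs step traced above captures the full spdk_nvme_identify report for the controller at 0000:00:10.0 and pulls the native block size out of it with two bash regex matches: first the index of the currently selected LBA format (#04), then that format's data size (4096). A condensed sketch of the same extraction, with the regexes copied from the trace; treat it as an illustration rather than the verbatim dd/common.sh source:

  pci=0000:00:10.0
  mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
  # First match: which LBA format is currently selected (here: #04).
  re='Current LBA Format: *LBA Format #([0-9]+)'
  [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}
  # Second match: the data size of that format (here: 4096 bytes).
  re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ ${id[*]} =~ $re ]] && native_bs=${BASH_REMATCH[1]}
  echo "$native_bs"                            # 4096, picked up as native_bs by basic_rw.sh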
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.178 12:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:02.178 { 00:06:02.178 "subsystems": [ 00:06:02.178 { 00:06:02.178 "subsystem": "bdev", 00:06:02.178 "config": [ 00:06:02.178 { 00:06:02.178 "params": { 00:06:02.178 "trtype": "pcie", 00:06:02.178 "traddr": "0000:00:10.0", 00:06:02.178 "name": "Nvme0" 00:06:02.178 }, 00:06:02.178 "method": "bdev_nvme_attach_controller" 00:06:02.178 }, 00:06:02.178 { 00:06:02.178 "method": "bdev_wait_for_examine" 00:06:02.178 } 00:06:02.178 ] 00:06:02.178 } 00:06:02.178 ] 00:06:02.178 } 00:06:02.178 [2024-07-15 12:48:18.034778] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
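The JSON blob dumped above is the bdev configuration every spdk_dd invocation in this suite receives over an anonymous file descriptor (--json /dev/fd/61): it attaches the PCIe controller at 0000:00:10.0 as bdev Nvme0n1, mirroring the method_bdev_nvme_attach_controller_0 array declared earlier, and then waits for bdev examination. This particular test deliberately passes --bs=2048, smaller than the 4096-byte native block size, and wraps the call in NOT so the expected failure (reported just below) counts as a pass. A hand-rolled equivalent might look as follows; /dev/zero and the process substitution stand in for the harness's generated input and gen_conf plumbing, so the exact fd handling is an assumption:

  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
     "method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}'
  # --bs=2048 is below the 4096-byte native block size, so spdk_dd must refuse it.
  # /dev/zero stands in for the data the harness feeds over /dev/fd/62.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 \
       --bs=2048 --json <(printf '%s' "$conf"); then
    echo 'unexpected success'
  else
    echo 'failed as expected: --bs is smaller than the native block size'
  fi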
00:06:02.178 [2024-07-15 12:48:18.034860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62503 ] 00:06:02.178 [2024-07-15 12:48:18.174700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.435 [2024-07-15 12:48:18.305781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.435 [2024-07-15 12:48:18.365530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.435 [2024-07-15 12:48:18.475111] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:02.435 [2024-07-15 12:48:18.475185] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.692 [2024-07-15 12:48:18.595942] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:02.692 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:02.692 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.692 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:02.692 ************************************ 00:06:02.692 END TEST dd_bs_lt_native_bs 00:06:02.692 ************************************ 00:06:02.692 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:02.692 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:02.692 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.692 00:06:02.692 real 0m0.718s 00:06:02.692 user 0m0.494s 00:06:02.693 sys 0m0.171s 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.693 ************************************ 00:06:02.693 START TEST dd_rw 00:06:02.693 ************************************ 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
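The exit-status handling traced above is how the NOT wrapper turns spdk_dd's refusal into a test pass: the raw status 234 is reduced by 128 to 106, the case statement collapses any non-zero value to 1, and the final arithmetic test (( !es == 0 )) succeeds precisely because the wrapped command failed. A rough reconstruction of that tail end; the actual case arms live in autotest_common.sh and are not shown in this excerpt, so the skeleton below is an assumption that only restates the values visible in the trace:

  es=234                                 # exit status of the failed spdk_dd run
  (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106, as in the trace
  case "$es" in
    0) ;;                                # the wrapped command unexpectedly succeeded
    *) es=1 ;;                           # any failure is normalised to 1
  esac
  (( !es == 0 ))                         # true for es=1, so NOT (and the test) passes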
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:02.693 12:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.625 12:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:03.625 12:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:03.625 12:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.625 12:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.625 [2024-07-15 12:48:19.425215] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:03.625 [2024-07-15 12:48:19.425668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62534 ] 00:06:03.625 { 00:06:03.625 "subsystems": [ 00:06:03.625 { 00:06:03.625 "subsystem": "bdev", 00:06:03.625 "config": [ 00:06:03.625 { 00:06:03.625 "params": { 00:06:03.625 "trtype": "pcie", 00:06:03.625 "traddr": "0000:00:10.0", 00:06:03.625 "name": "Nvme0" 00:06:03.625 }, 00:06:03.625 "method": "bdev_nvme_attach_controller" 00:06:03.625 }, 00:06:03.625 { 00:06:03.625 "method": "bdev_wait_for_examine" 00:06:03.625 } 00:06:03.625 ] 00:06:03.625 } 00:06:03.625 ] 00:06:03.625 } 00:06:03.625 [2024-07-15 12:48:19.564614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.883 [2024-07-15 12:48:19.711153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.883 [2024-07-15 12:48:19.764451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.140  Copying: 60/60 [kB] (average 19 MBps) 00:06:04.140 00:06:04.140 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:04.140 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:04.140 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.140 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.140 { 00:06:04.140 "subsystems": [ 00:06:04.140 { 00:06:04.140 "subsystem": "bdev", 00:06:04.140 "config": [ 
00:06:04.140 { 00:06:04.140 "params": { 00:06:04.140 "trtype": "pcie", 00:06:04.140 "traddr": "0000:00:10.0", 00:06:04.140 "name": "Nvme0" 00:06:04.140 }, 00:06:04.140 "method": "bdev_nvme_attach_controller" 00:06:04.140 }, 00:06:04.140 { 00:06:04.140 "method": "bdev_wait_for_examine" 00:06:04.140 } 00:06:04.140 ] 00:06:04.140 } 00:06:04.140 ] 00:06:04.140 } 00:06:04.140 [2024-07-15 12:48:20.150102] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:04.140 [2024-07-15 12:48:20.150205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62553 ] 00:06:04.398 [2024-07-15 12:48:20.288787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.398 [2024-07-15 12:48:20.406761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.654 [2024-07-15 12:48:20.460331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.912  Copying: 60/60 [kB] (average 29 MBps) 00:06:04.912 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.912 12:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.912 [2024-07-15 12:48:20.853238] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:04.912 [2024-07-15 12:48:20.853379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62574 ] 00:06:04.912 { 00:06:04.912 "subsystems": [ 00:06:04.912 { 00:06:04.912 "subsystem": "bdev", 00:06:04.912 "config": [ 00:06:04.912 { 00:06:04.912 "params": { 00:06:04.912 "trtype": "pcie", 00:06:04.912 "traddr": "0000:00:10.0", 00:06:04.912 "name": "Nvme0" 00:06:04.912 }, 00:06:04.912 "method": "bdev_nvme_attach_controller" 00:06:04.912 }, 00:06:04.912 { 00:06:04.912 "method": "bdev_wait_for_examine" 00:06:04.912 } 00:06:04.912 ] 00:06:04.912 } 00:06:04.912 ] 00:06:04.912 } 00:06:05.170 [2024-07-15 12:48:20.987888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.171 [2024-07-15 12:48:21.106272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.171 [2024-07-15 12:48:21.159626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.428  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:05.428 00:06:05.687 12:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:05.687 12:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:05.687 12:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:05.687 12:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:05.687 12:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:05.687 12:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:05.687 12:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.254 12:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:06.254 12:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:06.254 12:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.254 12:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.254 [2024-07-15 12:48:22.195432] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
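Each bs/qd combination in the dd_rw matrix runs the same four-step cycle that has just completed above for bs=4096, qd=1: write the generated dump file into the namespace, read the same number of blocks back into a second file, byte-compare the two, then wipe the start of the namespace before the next pass (the qd=64 pass is starting here). The block sizes come from left-shifting the native 4096 (4096, 8192, 16384) and the queue depths are 1 and 64. A compact sketch of one pass, with the spdk_dd flags and paths copied from the trace; SPDK_CONF stands in for the harness's generated --json config and head -c for gen_bytes, whose body is not traced here:

  bs=4096 qd=1 count=15
  size=$(( bs * count ))                       # 61440 bytes per pass
  dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  head -c "$size" /dev/urandom > "$dump0"      # stand-in for gen_bytes
  # 1) write the pattern to the namespace
  spdk_dd --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$SPDK_CONF"
  # 2) read the same 15 blocks back
  spdk_dd --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json "$SPDK_CONF"
  # 3) the pass only counts if the round trip is bit-identical
  diff -q "$dump0" "$dump1"
  # 4) clear_nvme: overwrite the first 1 MiB of the namespace before the next pass
  spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$SPDK_CONF"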
00:06:06.254 [2024-07-15 12:48:22.195557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62593 ] 00:06:06.254 { 00:06:06.254 "subsystems": [ 00:06:06.254 { 00:06:06.254 "subsystem": "bdev", 00:06:06.254 "config": [ 00:06:06.254 { 00:06:06.254 "params": { 00:06:06.254 "trtype": "pcie", 00:06:06.255 "traddr": "0000:00:10.0", 00:06:06.255 "name": "Nvme0" 00:06:06.255 }, 00:06:06.255 "method": "bdev_nvme_attach_controller" 00:06:06.255 }, 00:06:06.255 { 00:06:06.255 "method": "bdev_wait_for_examine" 00:06:06.255 } 00:06:06.255 ] 00:06:06.255 } 00:06:06.255 ] 00:06:06.255 } 00:06:06.513 [2024-07-15 12:48:22.335288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.513 [2024-07-15 12:48:22.451665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.513 [2024-07-15 12:48:22.505575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.030  Copying: 60/60 [kB] (average 58 MBps) 00:06:07.030 00:06:07.030 12:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:07.030 12:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:07.030 12:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.030 12:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.030 [2024-07-15 12:48:22.901297] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:07.030 [2024-07-15 12:48:22.901427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62611 ] 00:06:07.030 { 00:06:07.030 "subsystems": [ 00:06:07.030 { 00:06:07.030 "subsystem": "bdev", 00:06:07.030 "config": [ 00:06:07.030 { 00:06:07.030 "params": { 00:06:07.030 "trtype": "pcie", 00:06:07.030 "traddr": "0000:00:10.0", 00:06:07.030 "name": "Nvme0" 00:06:07.030 }, 00:06:07.030 "method": "bdev_nvme_attach_controller" 00:06:07.030 }, 00:06:07.030 { 00:06:07.030 "method": "bdev_wait_for_examine" 00:06:07.030 } 00:06:07.030 ] 00:06:07.030 } 00:06:07.030 ] 00:06:07.030 } 00:06:07.030 [2024-07-15 12:48:23.043263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.312 [2024-07-15 12:48:23.165797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.312 [2024-07-15 12:48:23.227282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.588  Copying: 60/60 [kB] (average 58 MBps) 00:06:07.588 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.588 12:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.588 [2024-07-15 12:48:23.608273] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:07.588 [2024-07-15 12:48:23.608670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62622 ] 00:06:07.588 { 00:06:07.588 "subsystems": [ 00:06:07.588 { 00:06:07.588 "subsystem": "bdev", 00:06:07.588 "config": [ 00:06:07.588 { 00:06:07.588 "params": { 00:06:07.588 "trtype": "pcie", 00:06:07.588 "traddr": "0000:00:10.0", 00:06:07.588 "name": "Nvme0" 00:06:07.588 }, 00:06:07.588 "method": "bdev_nvme_attach_controller" 00:06:07.588 }, 00:06:07.588 { 00:06:07.588 "method": "bdev_wait_for_examine" 00:06:07.588 } 00:06:07.588 ] 00:06:07.588 } 00:06:07.588 ] 00:06:07.588 } 00:06:07.845 [2024-07-15 12:48:23.751391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.845 [2024-07-15 12:48:23.864895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.102 [2024-07-15 12:48:23.918139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.361  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:08.361 00:06:08.361 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:08.361 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:08.361 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:08.361 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:08.361 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:08.361 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:08.361 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:08.361 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.926 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:08.926 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:08.926 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.926 12:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.926 { 00:06:08.926 "subsystems": [ 00:06:08.926 { 00:06:08.926 "subsystem": "bdev", 00:06:08.926 "config": [ 00:06:08.926 { 00:06:08.927 "params": { 00:06:08.927 "trtype": "pcie", 00:06:08.927 "traddr": "0000:00:10.0", 00:06:08.927 "name": "Nvme0" 00:06:08.927 }, 00:06:08.927 "method": "bdev_nvme_attach_controller" 00:06:08.927 }, 00:06:08.927 { 00:06:08.927 "method": "bdev_wait_for_examine" 00:06:08.927 } 00:06:08.927 ] 00:06:08.927 } 00:06:08.927 ] 00:06:08.927 } 00:06:08.927 [2024-07-15 12:48:24.904107] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
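The passes around this point switch to the next block size, 8192 bytes: the block count drops from 15 to 7 so the transfer stays at a comparable size (count × bs), and both queue depths are exercised again. The arithmetic behind the sizes seen in the trace, for reference:

  echo $(( 4096 * 15 ))   # 61440 - size used for the bs=4096 passes
  echo $(( 8192 * 7 ))    # 57344 - size used for the bs=8192 passes traced here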
00:06:08.927 [2024-07-15 12:48:24.904240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62647 ] 00:06:09.185 [2024-07-15 12:48:25.038109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.185 [2024-07-15 12:48:25.156305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.185 [2024-07-15 12:48:25.211047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.700  Copying: 56/56 [kB] (average 27 MBps) 00:06:09.700 00:06:09.700 12:48:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:09.700 12:48:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:09.700 12:48:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.700 12:48:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.700 [2024-07-15 12:48:25.590037] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:09.700 [2024-07-15 12:48:25.590132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62660 ] 00:06:09.700 { 00:06:09.700 "subsystems": [ 00:06:09.700 { 00:06:09.700 "subsystem": "bdev", 00:06:09.700 "config": [ 00:06:09.700 { 00:06:09.700 "params": { 00:06:09.700 "trtype": "pcie", 00:06:09.700 "traddr": "0000:00:10.0", 00:06:09.700 "name": "Nvme0" 00:06:09.700 }, 00:06:09.700 "method": "bdev_nvme_attach_controller" 00:06:09.700 }, 00:06:09.700 { 00:06:09.700 "method": "bdev_wait_for_examine" 00:06:09.700 } 00:06:09.700 ] 00:06:09.700 } 00:06:09.700 ] 00:06:09.700 } 00:06:09.700 [2024-07-15 12:48:25.726043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.957 [2024-07-15 12:48:25.854072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.957 [2024-07-15 12:48:25.912447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.215  Copying: 56/56 [kB] (average 27 MBps) 00:06:10.215 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.215 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.557 [2024-07-15 12:48:26.296202] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:10.557 [2024-07-15 12:48:26.296296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62681 ] 00:06:10.557 { 00:06:10.557 "subsystems": [ 00:06:10.557 { 00:06:10.557 "subsystem": "bdev", 00:06:10.557 "config": [ 00:06:10.557 { 00:06:10.557 "params": { 00:06:10.557 "trtype": "pcie", 00:06:10.557 "traddr": "0000:00:10.0", 00:06:10.557 "name": "Nvme0" 00:06:10.557 }, 00:06:10.557 "method": "bdev_nvme_attach_controller" 00:06:10.557 }, 00:06:10.557 { 00:06:10.557 "method": "bdev_wait_for_examine" 00:06:10.557 } 00:06:10.557 ] 00:06:10.557 } 00:06:10.557 ] 00:06:10.557 } 00:06:10.557 [2024-07-15 12:48:26.431654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.557 [2024-07-15 12:48:26.548543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.841 [2024-07-15 12:48:26.603165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.099  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:11.099 00:06:11.099 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:11.099 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:11.099 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:11.099 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:11.099 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:11.099 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:11.100 12:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.668 12:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:11.668 12:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:11.668 12:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.668 12:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.668 [2024-07-15 12:48:27.539313] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
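The records above complete one full dd_rw iteration for bs=8192 at queue depth 1: write dd.dump0 to Nvme0n1 (pid 62647), read it back into dd.dump1 (pid 62660), compare the two files, and zero the target region before the next queue depth (pid 62681). A condensed sketch of that cycle, reconstructed from the commands visible in the xtrace; it assumes dd/common.sh is sourced for gen_conf, gen_bytes and clear_nvme, and the loop structure and the gen_bytes usage are assumptions based on the bss/qds variable names:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
bss=(8192 16384)
qds=(1 64)

for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    count=$(( bs == 8192 ? 7 : 3 ))   # 7 blocks for 8 KiB I/O, 3 for 16 KiB, matching the trace
    size=$((count * bs))              # e.g. 7 * 8192 = 57344
    gen_bytes "$size"                 # assumed to refill dd.dump0 with $size random bytes
    "$DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)                    # write
    "$DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)   # read back
    diff -q "$DUMP0" "$DUMP1"         # the round trip must be byte-for-byte identical
    clear_nvme Nvme0n1 '' "$size"     # zero-fill before the next combination
  done
done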
00:06:11.668 [2024-07-15 12:48:27.539717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62700 ] 00:06:11.668 { 00:06:11.668 "subsystems": [ 00:06:11.668 { 00:06:11.668 "subsystem": "bdev", 00:06:11.668 "config": [ 00:06:11.668 { 00:06:11.668 "params": { 00:06:11.668 "trtype": "pcie", 00:06:11.668 "traddr": "0000:00:10.0", 00:06:11.668 "name": "Nvme0" 00:06:11.668 }, 00:06:11.668 "method": "bdev_nvme_attach_controller" 00:06:11.668 }, 00:06:11.668 { 00:06:11.669 "method": "bdev_wait_for_examine" 00:06:11.669 } 00:06:11.669 ] 00:06:11.669 } 00:06:11.669 ] 00:06:11.669 } 00:06:11.669 [2024-07-15 12:48:27.674609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.927 [2024-07-15 12:48:27.793917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.927 [2024-07-15 12:48:27.849062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.186  Copying: 56/56 [kB] (average 54 MBps) 00:06:12.186 00:06:12.186 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:12.186 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:12.186 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.186 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.444 [2024-07-15 12:48:28.249531] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:12.444 [2024-07-15 12:48:28.249631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62719 ] 00:06:12.444 { 00:06:12.444 "subsystems": [ 00:06:12.444 { 00:06:12.444 "subsystem": "bdev", 00:06:12.444 "config": [ 00:06:12.444 { 00:06:12.444 "params": { 00:06:12.444 "trtype": "pcie", 00:06:12.444 "traddr": "0000:00:10.0", 00:06:12.444 "name": "Nvme0" 00:06:12.444 }, 00:06:12.444 "method": "bdev_nvme_attach_controller" 00:06:12.444 }, 00:06:12.444 { 00:06:12.444 "method": "bdev_wait_for_examine" 00:06:12.444 } 00:06:12.444 ] 00:06:12.444 } 00:06:12.444 ] 00:06:12.444 } 00:06:12.444 [2024-07-15 12:48:28.380530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.703 [2024-07-15 12:48:28.530994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.703 [2024-07-15 12:48:28.586305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.961  Copying: 56/56 [kB] (average 54 MBps) 00:06:12.961 00:06:12.961 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.961 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:12.961 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:12.961 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:12.961 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:12.961 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:12.961 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:12.962 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:12.962 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:12.962 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.962 12:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 [2024-07-15 12:48:28.987609] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:12.962 [2024-07-15 12:48:28.987747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62735 ] 00:06:12.962 { 00:06:12.962 "subsystems": [ 00:06:12.962 { 00:06:12.962 "subsystem": "bdev", 00:06:12.962 "config": [ 00:06:12.962 { 00:06:12.962 "params": { 00:06:12.962 "trtype": "pcie", 00:06:12.962 "traddr": "0000:00:10.0", 00:06:12.962 "name": "Nvme0" 00:06:12.962 }, 00:06:12.962 "method": "bdev_nvme_attach_controller" 00:06:12.962 }, 00:06:12.962 { 00:06:12.962 "method": "bdev_wait_for_examine" 00:06:12.962 } 00:06:12.962 ] 00:06:12.962 } 00:06:12.962 ] 00:06:12.962 } 00:06:13.220 [2024-07-15 12:48:29.125073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.478 [2024-07-15 12:48:29.281004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.478 [2024-07-15 12:48:29.345467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.737  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:13.737 00:06:13.737 12:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:13.737 12:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:13.737 12:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:13.737 12:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:13.737 12:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:13.737 12:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:13.737 12:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:13.737 12:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.302 12:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:14.302 12:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:14.302 12:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.302 12:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.302 [2024-07-15 12:48:30.248950] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
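The "Copying: 1024/1024 [kB]" records that close each iteration above come from the clear_nvme helper; its xtrace (dd/common.sh lines 10-18) shows it writing a single 1 MiB block of zeroes to the bdev. A simplified reconstruction, based only on what the trace shows (the real helper also takes an nvme_ref argument and may do more than this):

clear_nvme() {
  local bdev=$1 nvme_ref=$2 size=$3
  local bs=1048576 count=1
  # One 1 MiB block of zeroes over the start of the bdev, same config as the data runs.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
}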
00:06:14.302 [2024-07-15 12:48:30.249041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62755 ] 00:06:14.302 { 00:06:14.302 "subsystems": [ 00:06:14.302 { 00:06:14.302 "subsystem": "bdev", 00:06:14.302 "config": [ 00:06:14.302 { 00:06:14.302 "params": { 00:06:14.302 "trtype": "pcie", 00:06:14.302 "traddr": "0000:00:10.0", 00:06:14.302 "name": "Nvme0" 00:06:14.302 }, 00:06:14.302 "method": "bdev_nvme_attach_controller" 00:06:14.302 }, 00:06:14.302 { 00:06:14.302 "method": "bdev_wait_for_examine" 00:06:14.302 } 00:06:14.302 ] 00:06:14.302 } 00:06:14.302 ] 00:06:14.302 } 00:06:14.559 [2024-07-15 12:48:30.380154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.559 [2024-07-15 12:48:30.496536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.559 [2024-07-15 12:48:30.551587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.075  Copying: 48/48 [kB] (average 46 MBps) 00:06:15.075 00:06:15.075 12:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:15.075 12:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:15.075 12:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.075 12:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.075 [2024-07-15 12:48:30.933940] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:15.075 [2024-07-15 12:48:30.934042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62774 ] 00:06:15.075 { 00:06:15.075 "subsystems": [ 00:06:15.075 { 00:06:15.075 "subsystem": "bdev", 00:06:15.075 "config": [ 00:06:15.075 { 00:06:15.075 "params": { 00:06:15.075 "trtype": "pcie", 00:06:15.075 "traddr": "0000:00:10.0", 00:06:15.075 "name": "Nvme0" 00:06:15.075 }, 00:06:15.075 "method": "bdev_nvme_attach_controller" 00:06:15.075 }, 00:06:15.075 { 00:06:15.075 "method": "bdev_wait_for_examine" 00:06:15.075 } 00:06:15.075 ] 00:06:15.075 } 00:06:15.075 ] 00:06:15.076 } 00:06:15.076 [2024-07-15 12:48:31.067821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.336 [2024-07-15 12:48:31.183820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.336 [2024-07-15 12:48:31.238949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.594  Copying: 48/48 [kB] (average 46 MBps) 00:06:15.594 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.594 12:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.594 { 00:06:15.594 "subsystems": [ 00:06:15.594 { 00:06:15.594 "subsystem": "bdev", 00:06:15.594 "config": [ 00:06:15.594 { 00:06:15.594 "params": { 00:06:15.594 "trtype": "pcie", 00:06:15.594 "traddr": "0000:00:10.0", 00:06:15.594 "name": "Nvme0" 00:06:15.594 }, 00:06:15.594 "method": "bdev_nvme_attach_controller" 00:06:15.594 }, 00:06:15.594 { 00:06:15.594 "method": "bdev_wait_for_examine" 00:06:15.594 } 00:06:15.594 ] 00:06:15.594 } 00:06:15.594 ] 00:06:15.594 } 00:06:15.594 [2024-07-15 12:48:31.647513] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:15.594 [2024-07-15 12:48:31.647644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62789 ] 00:06:15.852 [2024-07-15 12:48:31.785024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.852 [2024-07-15 12:48:31.899355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.110 [2024-07-15 12:48:31.952087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.368  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:16.368 00:06:16.368 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:16.368 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:16.368 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:16.368 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:16.368 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:16.368 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:16.368 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.935 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:16.935 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:16.935 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.935 12:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.935 { 00:06:16.935 "subsystems": [ 00:06:16.935 { 00:06:16.935 "subsystem": "bdev", 00:06:16.935 "config": [ 00:06:16.935 { 00:06:16.935 "params": { 00:06:16.935 "trtype": "pcie", 00:06:16.935 "traddr": "0000:00:10.0", 00:06:16.935 "name": "Nvme0" 00:06:16.935 }, 00:06:16.935 "method": "bdev_nvme_attach_controller" 00:06:16.935 }, 00:06:16.935 { 00:06:16.935 "method": "bdev_wait_for_examine" 00:06:16.935 } 00:06:16.935 ] 00:06:16.935 } 00:06:16.935 ] 00:06:16.935 } 00:06:16.935 [2024-07-15 12:48:32.810905] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:16.935 [2024-07-15 12:48:32.810987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62808 ] 00:06:16.935 [2024-07-15 12:48:32.949648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.193 [2024-07-15 12:48:33.052946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.193 [2024-07-15 12:48:33.107365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.450  Copying: 48/48 [kB] (average 46 MBps) 00:06:17.450 00:06:17.450 12:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:17.450 12:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:17.450 12:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.450 12:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.450 [2024-07-15 12:48:33.475909] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:17.450 [2024-07-15 12:48:33.475992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62826 ] 00:06:17.450 { 00:06:17.450 "subsystems": [ 00:06:17.450 { 00:06:17.450 "subsystem": "bdev", 00:06:17.450 "config": [ 00:06:17.450 { 00:06:17.450 "params": { 00:06:17.450 "trtype": "pcie", 00:06:17.450 "traddr": "0000:00:10.0", 00:06:17.450 "name": "Nvme0" 00:06:17.450 }, 00:06:17.450 "method": "bdev_nvme_attach_controller" 00:06:17.450 }, 00:06:17.450 { 00:06:17.450 "method": "bdev_wait_for_examine" 00:06:17.450 } 00:06:17.450 ] 00:06:17.450 } 00:06:17.450 ] 00:06:17.450 } 00:06:17.707 [2024-07-15 12:48:33.606791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.707 [2024-07-15 12:48:33.720831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.965 [2024-07-15 12:48:33.774625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.223  Copying: 48/48 [kB] (average 46 MBps) 00:06:18.223 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 
--count=1 --json /dev/fd/62 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.223 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.223 { 00:06:18.223 "subsystems": [ 00:06:18.223 { 00:06:18.223 "subsystem": "bdev", 00:06:18.223 "config": [ 00:06:18.223 { 00:06:18.223 "params": { 00:06:18.223 "trtype": "pcie", 00:06:18.223 "traddr": "0000:00:10.0", 00:06:18.223 "name": "Nvme0" 00:06:18.223 }, 00:06:18.223 "method": "bdev_nvme_attach_controller" 00:06:18.223 }, 00:06:18.223 { 00:06:18.223 "method": "bdev_wait_for_examine" 00:06:18.223 } 00:06:18.223 ] 00:06:18.223 } 00:06:18.223 ] 00:06:18.223 } 00:06:18.223 [2024-07-15 12:48:34.160375] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:18.223 [2024-07-15 12:48:34.160473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62843 ] 00:06:18.485 [2024-07-15 12:48:34.296627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.485 [2024-07-15 12:48:34.411350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.485 [2024-07-15 12:48:34.466330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.751  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:18.751 00:06:18.751 00:06:18.751 real 0m16.045s 00:06:18.751 user 0m12.017s 00:06:18.751 sys 0m5.396s 00:06:18.751 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.751 ************************************ 00:06:18.751 END TEST dd_rw 00:06:18.751 ************************************ 00:06:18.751 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.009 ************************************ 00:06:19.009 START TEST dd_rw_offset 00:06:19.009 ************************************ 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:19.009 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=o8h7hazvvmcdgwovybmblefog03sta7jea537nbpiaomnn9t3pu9xl8bui18k66dgbz348hoikyi7mrwc0vomcvmcaowetgbfdllvfdyv25hleq22iexcriulhzdaqi7z8gffvc7gjuid4bm9ck18egrilyc45qt2x8vql32fd9azdovhc72u35cgrg6t4kdsa49699jx3jv82spieedvhtjs2vy4zfopit7btcikqk9ejmw422g6lngrzvtrm2od20jmpp1yz7kv5jb6q0jfrbuidqy0l4xvrmu1tbm2rd6k9m8tbwq25ahy4g6w6r66p7jm1o8xcx6yv5rbii894otv3xg2kn4uofwpqbhxv5v90gsxltu8xkun4bqm679zij49xgbj36w12xwtfnkomwg4xuuwihvfxoqr3xmbgn9wju5rpljbpajikcd0n5k881sl8hdvwksr0rcjybnpz1bpc3xxa80vbgk5rpvhm5s3798k4m4xno1440ijh327pecb81b78f9wkxmn768j0xd3pictpjknrv7mc91gqgmpx2cisyupgcdm1u0x1wk9zj3g1jjdfpwd9pqeyksyyijoygdg7upt5r4gnx7xmfvivbv54eme6qi6o54ehp8f41ptgvut4rn0uar9yu1iwuhovnvs1vdvi7d2j1sieqzsj95biowr09qnlh4vt3vkuy5ecmqmt8qiydfu6t4tl7yya4pnwr5bppcdc6ufsg96we1pgj7v72pl6i724uowf2za76ztu3bnw0uvn3x2q0fkddjzcmxxxohplc28iuxy68ep5t8n6wh948wqhgn7gp47vwk90fypwr8xxj80ofmopn9tyb0hr6gm1rlsdncyucybziz5fr5sas8wc70uq1iiw7n0qg6dfd8ajl533e8xpc42hrbzegn6atuplt9uo4fphb2nqpxwjptu40w4zlwmojttp2nsjhdkc5sxcj1z2g8si44gzt3tumvducuh8b7qft9oabzikrdblttg2os76s3vj19wzp65rlvr0ikhaohbuop2fpi5q9c4037awtu8hm58asoaycqx6drxbve0t8438p6wkjt9m7cy2f7mlsiixnrb3e70p1cqei9e7bv8dx4h8rnhavg5wrzmix1xdy5f2tg4jmrbc9htbgk1ptrui18f2vn5ouy8goui54pajr0dui8p31m7qgfm5cdw54d12b8dxccn0iuraxntyalh5gguas7ckz6k11d0mjslk7gz9rvku54vk6zl2r042zwixr261265zvb5dv26npltf9grqm7t4m0r51es10dxb67ul7xitz5tijdy51hqt9bbziihg4ju94d6swdkza3bzzb3vk8w78gqb2t24xj44ngwvdyydfebovtfv8cvs9qlyga5wy0fopf5htblec65awaric88c2f8stzf9km2xgb2ifwmrbz7lbowdvoetqc586972e96ykqoblib4qlqdzxespnj27i3s7ajvx4xfomk03unhekvj7nd2inobmrgd3ndwjj6bi72bj2i7ady0s3dpkel841l77n0vjqlyo6q7uff09gbqgir5bct74sn88ue4q8uz65qfuoqb4l8l8i4zc5aqkg8trmyi9ijhnvgkg93uw16hd6vtjx7gkcp5nrtkwyrvo8deoinngu9vlyo2z5vfm1zwiph6buk742xr4cbvyx75rq5yfi4thdyga6teycv3bw1q4ovypr5q2eh47x4lnmagkvm1wtm3n7qa9jmzmag0fxa2e1e2q083qssvim6nelol15i42gd5z2dv20oeljc6vo2gbrem0f2llyewnmjwbnmfw2mm1jd32v88xeoyjbblirv3hdahhrzgvcit0zvj9e2tjolsf950yuuzx1n7kolf74eqjxlfe7qzz2rmg7wgmd2eyo5f6ybjqo1os0ydlw9rz2qotvgmg0zo4986zug8ueczvtoa1wbbvymq5yilhdmmh6d25mkvg0q8fm4up3a604p4nzawsmvwbvc46x8spjz6z3gtj42052p1qal0rbbrzunib3qw663q8oty0bmu18lx4jewoq3wzhmenwxdyetf7jtv1ubp2u17do08xxnh4wfzdnw6c41ope84nvhrlvkm5eni3ae5hzgf2jn24aaya42zs9ijwji17nbkl91flle93cnitm2vs36q0y1vdt1inlj5mji2tojdwqvahg7xrosmrap8oftroulr3zv23osdutxtq2w1mj4128967k8g5636qksc2jhivlxkijkwkdkduf0b57mnj5b8onbctn4iw012wrtrivv8cr8s4nuagap39cqlty3mknip7wlbkunn2t5i2uvpvfavjohsc3i1b7yr7le8dsa80u25h5j6kzp7fza8pyzqplxq8o5dko5mscnmt22y9bt5ghzomvsuqp5cudd4ic2jg34y6i6sgdg2egxm24wv3rifqof0qsd35zwcn49adh96cxbyhoxw2evjuyz5sfpx0c1sq3vz6kh5v1jx82c9r7fohd7uoriycqaexyoj7fho4dq5xk1orbyt59bx4o4z2hy4hs3gan8ej2gxuiszagzjh45ak705ad0mfkhsai7ug7njpzaj6ljyu3hfphqik7msxjo4rzidynf6peopetpm45b2ps5qwwxls7vuufmcxk5tcqhmcnli2fayohj7y1hnwwanmcu518ngkppfxhpj30po1vgxx0coajhix2le8ak9vnvvpyfm2lw1ub8hxw15qqsz8rhna1y9ma5qzev3pa5cewqa89q60j9ckhe90bcghfgbj29xwq5n1a7hjx37cucqfgp6o3dvjglkzuj7enrhycshpuewz7yyd5uzlrcfm5imznlqiwpltt7i6b8cc9h4ye9l1adfhr507feepwo72um9612hlmnkxecwm5i239j5bgdu7ipxaliafqiibgnemmdzw2ueau9fdd1t6nfaf4a5iiaknrtofkhb15b2fqx76ke5tu3r41ck2qflczodepvvp24r489j0innfqqpmyaz0fqlakyser8qw0r5s0ss5bcm9znim1oa1e7okxwqtuza9oa19w8i5w4l8ybwdhh5luskraunqlg840mmwoznd6dottwyyegacrryz26w2jfc3scyajryohro5cxcieijete2gut1qhuv0mtesili5e9vsicpkeg6hpg1qpi2lzeckmiu27w2sxrdzmehb8k9anxdh08eac95ky15t5ckrulvm414jt6guwsmft8h2a8ezb5751e8j2d0q0dpmk8mlwnpzvm6cvjpw0yfa7tzvznsg0439hyyqzo7z6jxebvr2l7m6gipvodc6qul9ar8j05w1q9au1qk0h5hy8h7p4simhjvne6966ho3whsxz4rdo0rvu7e15c6aw8jzn9b6jin4e72nko2o3zlog64k64ztbgilc1ondz2maowuibiw4qolybsnhqfgd444cxna4iblj75tp67fjltcf8m
x1ffflxrbjcmw1h7m2ft8n7qq9ughd24b84xz9v0yzz2kou0vlgn0i0c1cukzq5kh5zrh9kayyf9te9866zje66sb7c7sfs3wdn8k6b77j5sclpvzpfojpsbyhe6b4iotkjbxm10df1hecsj3z952thkw43qtrdd88x3bzl2d7i899c1wcnymir2v4p7qg4yhyjezc9vr3b1d3zlib41xafsbnzfp2n3r46u0opzc76u30cfy477uswxtcdrc1n368dl7y7fst98yn1p93sn7fm3ha0lrlxoh9h9a8jpa6mrfgd58m9o0fznkadjabykqxr993cvd7985cwb321y0gh75zogk2qxg5r1cq43n0o4tqb7zcy806leb5121qon4mwdo6vog4ghklyxhl2zgbdg8nv9mptosgu7z16g6qj7o35kf6rna1gks27p6uq24azxz94fsr0zvf3coq9wa21m8gfeedozxml772mm0uz8q8lmj0xvqny3f0sm8qedk67p4yg0lrzqmgwtzhupi813lng1n7cz4s 00:06:19.010 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:19.010 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:19.010 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:19.010 12:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.010 [2024-07-15 12:48:34.939264] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:19.010 [2024-07-15 12:48:34.939355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62879 ] 00:06:19.010 { 00:06:19.010 "subsystems": [ 00:06:19.010 { 00:06:19.010 "subsystem": "bdev", 00:06:19.010 "config": [ 00:06:19.010 { 00:06:19.010 "params": { 00:06:19.010 "trtype": "pcie", 00:06:19.010 "traddr": "0000:00:10.0", 00:06:19.010 "name": "Nvme0" 00:06:19.010 }, 00:06:19.010 "method": "bdev_nvme_attach_controller" 00:06:19.010 }, 00:06:19.010 { 00:06:19.010 "method": "bdev_wait_for_examine" 00:06:19.010 } 00:06:19.010 ] 00:06:19.010 } 00:06:19.010 ] 00:06:19.010 } 00:06:19.269 [2024-07-15 12:48:35.069554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.269 [2024-07-15 12:48:35.220605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.269 [2024-07-15 12:48:35.279173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.785  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:19.785 00:06:19.785 12:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:19.785 12:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:19.785 12:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:19.785 12:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.785 [2024-07-15 12:48:35.691436] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
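This is the dd_rw_offset test: the 4096 random bytes generated above were just written one block into the bdev (--seek=1, pid 62879), and the read that is starting here pulls the same region back out with --skip=1 --count=1 (pid 62892) so the contents can be compared. As with classic dd, --seek presumably offsets the output and --skip the input. A sketch of the round trip, reconstructed from the commands in the trace and again assuming gen_conf from dd/common.sh:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# Write the 4 KiB payload at offset 1 on the bdev.
"$DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)
# Read the same offset back into a scratch file.
"$DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(gen_conf)
# The test then compares the data read back against the generated string.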
00:06:19.785 [2024-07-15 12:48:35.691546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62892 ] 00:06:19.785 { 00:06:19.785 "subsystems": [ 00:06:19.785 { 00:06:19.785 "subsystem": "bdev", 00:06:19.785 "config": [ 00:06:19.785 { 00:06:19.785 "params": { 00:06:19.785 "trtype": "pcie", 00:06:19.785 "traddr": "0000:00:10.0", 00:06:19.785 "name": "Nvme0" 00:06:19.785 }, 00:06:19.786 "method": "bdev_nvme_attach_controller" 00:06:19.786 }, 00:06:19.786 { 00:06:19.786 "method": "bdev_wait_for_examine" 00:06:19.786 } 00:06:19.786 ] 00:06:19.786 } 00:06:19.786 ] 00:06:19.786 } 00:06:19.786 [2024-07-15 12:48:35.829638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.045 [2024-07-15 12:48:35.948868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.045 [2024-07-15 12:48:36.003535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.306  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:20.306 00:06:20.306 12:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:20.306 ************************************ 00:06:20.306 END TEST dd_rw_offset 00:06:20.306 ************************************ 00:06:20.307 12:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ o8h7hazvvmcdgwovybmblefog03sta7jea537nbpiaomnn9t3pu9xl8bui18k66dgbz348hoikyi7mrwc0vomcvmcaowetgbfdllvfdyv25hleq22iexcriulhzdaqi7z8gffvc7gjuid4bm9ck18egrilyc45qt2x8vql32fd9azdovhc72u35cgrg6t4kdsa49699jx3jv82spieedvhtjs2vy4zfopit7btcikqk9ejmw422g6lngrzvtrm2od20jmpp1yz7kv5jb6q0jfrbuidqy0l4xvrmu1tbm2rd6k9m8tbwq25ahy4g6w6r66p7jm1o8xcx6yv5rbii894otv3xg2kn4uofwpqbhxv5v90gsxltu8xkun4bqm679zij49xgbj36w12xwtfnkomwg4xuuwihvfxoqr3xmbgn9wju5rpljbpajikcd0n5k881sl8hdvwksr0rcjybnpz1bpc3xxa80vbgk5rpvhm5s3798k4m4xno1440ijh327pecb81b78f9wkxmn768j0xd3pictpjknrv7mc91gqgmpx2cisyupgcdm1u0x1wk9zj3g1jjdfpwd9pqeyksyyijoygdg7upt5r4gnx7xmfvivbv54eme6qi6o54ehp8f41ptgvut4rn0uar9yu1iwuhovnvs1vdvi7d2j1sieqzsj95biowr09qnlh4vt3vkuy5ecmqmt8qiydfu6t4tl7yya4pnwr5bppcdc6ufsg96we1pgj7v72pl6i724uowf2za76ztu3bnw0uvn3x2q0fkddjzcmxxxohplc28iuxy68ep5t8n6wh948wqhgn7gp47vwk90fypwr8xxj80ofmopn9tyb0hr6gm1rlsdncyucybziz5fr5sas8wc70uq1iiw7n0qg6dfd8ajl533e8xpc42hrbzegn6atuplt9uo4fphb2nqpxwjptu40w4zlwmojttp2nsjhdkc5sxcj1z2g8si44gzt3tumvducuh8b7qft9oabzikrdblttg2os76s3vj19wzp65rlvr0ikhaohbuop2fpi5q9c4037awtu8hm58asoaycqx6drxbve0t8438p6wkjt9m7cy2f7mlsiixnrb3e70p1cqei9e7bv8dx4h8rnhavg5wrzmix1xdy5f2tg4jmrbc9htbgk1ptrui18f2vn5ouy8goui54pajr0dui8p31m7qgfm5cdw54d12b8dxccn0iuraxntyalh5gguas7ckz6k11d0mjslk7gz9rvku54vk6zl2r042zwixr261265zvb5dv26npltf9grqm7t4m0r51es10dxb67ul7xitz5tijdy51hqt9bbziihg4ju94d6swdkza3bzzb3vk8w78gqb2t24xj44ngwvdyydfebovtfv8cvs9qlyga5wy0fopf5htblec65awaric88c2f8stzf9km2xgb2ifwmrbz7lbowdvoetqc586972e96ykqoblib4qlqdzxespnj27i3s7ajvx4xfomk03unhekvj7nd2inobmrgd3ndwjj6bi72bj2i7ady0s3dpkel841l77n0vjqlyo6q7uff09gbqgir5bct74sn88ue4q8uz65qfuoqb4l8l8i4zc5aqkg8trmyi9ijhnvgkg93uw16hd6vtjx7gkcp5nrtkwyrvo8deoinngu9vlyo2z5vfm1zwiph6buk742xr4cbvyx75rq5yfi4thdyga6teycv3bw1q4ovypr5q2eh47x4lnmagkvm1wtm3n7qa9jmzmag0fxa2e1e2q083qssvim6nelol15i42gd5z2dv20oeljc6vo2gbrem0f2llyewnmjwbnmfw2mm1jd32v88xeoyjbblirv3hdahhrzgvcit0zvj9e2tjolsf950yuuzx1n7kolf74eqjxlfe7qzz2rmg7wgmd2eyo5f6ybjqo1os0ydlw9rz2qotvgmg0zo4986zug8ueczvtoa1wbbvymq5yilhdmmh6d25mkvg0q8
fm4up3a604p4nzawsmvwbvc46x8spjz6z3gtj42052p1qal0rbbrzunib3qw663q8oty0bmu18lx4jewoq3wzhmenwxdyetf7jtv1ubp2u17do08xxnh4wfzdnw6c41ope84nvhrlvkm5eni3ae5hzgf2jn24aaya42zs9ijwji17nbkl91flle93cnitm2vs36q0y1vdt1inlj5mji2tojdwqvahg7xrosmrap8oftroulr3zv23osdutxtq2w1mj4128967k8g5636qksc2jhivlxkijkwkdkduf0b57mnj5b8onbctn4iw012wrtrivv8cr8s4nuagap39cqlty3mknip7wlbkunn2t5i2uvpvfavjohsc3i1b7yr7le8dsa80u25h5j6kzp7fza8pyzqplxq8o5dko5mscnmt22y9bt5ghzomvsuqp5cudd4ic2jg34y6i6sgdg2egxm24wv3rifqof0qsd35zwcn49adh96cxbyhoxw2evjuyz5sfpx0c1sq3vz6kh5v1jx82c9r7fohd7uoriycqaexyoj7fho4dq5xk1orbyt59bx4o4z2hy4hs3gan8ej2gxuiszagzjh45ak705ad0mfkhsai7ug7njpzaj6ljyu3hfphqik7msxjo4rzidynf6peopetpm45b2ps5qwwxls7vuufmcxk5tcqhmcnli2fayohj7y1hnwwanmcu518ngkppfxhpj30po1vgxx0coajhix2le8ak9vnvvpyfm2lw1ub8hxw15qqsz8rhna1y9ma5qzev3pa5cewqa89q60j9ckhe90bcghfgbj29xwq5n1a7hjx37cucqfgp6o3dvjglkzuj7enrhycshpuewz7yyd5uzlrcfm5imznlqiwpltt7i6b8cc9h4ye9l1adfhr507feepwo72um9612hlmnkxecwm5i239j5bgdu7ipxaliafqiibgnemmdzw2ueau9fdd1t6nfaf4a5iiaknrtofkhb15b2fqx76ke5tu3r41ck2qflczodepvvp24r489j0innfqqpmyaz0fqlakyser8qw0r5s0ss5bcm9znim1oa1e7okxwqtuza9oa19w8i5w4l8ybwdhh5luskraunqlg840mmwoznd6dottwyyegacrryz26w2jfc3scyajryohro5cxcieijete2gut1qhuv0mtesili5e9vsicpkeg6hpg1qpi2lzeckmiu27w2sxrdzmehb8k9anxdh08eac95ky15t5ckrulvm414jt6guwsmft8h2a8ezb5751e8j2d0q0dpmk8mlwnpzvm6cvjpw0yfa7tzvznsg0439hyyqzo7z6jxebvr2l7m6gipvodc6qul9ar8j05w1q9au1qk0h5hy8h7p4simhjvne6966ho3whsxz4rdo0rvu7e15c6aw8jzn9b6jin4e72nko2o3zlog64k64ztbgilc1ondz2maowuibiw4qolybsnhqfgd444cxna4iblj75tp67fjltcf8mx1ffflxrbjcmw1h7m2ft8n7qq9ughd24b84xz9v0yzz2kou0vlgn0i0c1cukzq5kh5zrh9kayyf9te9866zje66sb7c7sfs3wdn8k6b77j5sclpvzpfojpsbyhe6b4iotkjbxm10df1hecsj3z952thkw43qtrdd88x3bzl2d7i899c1wcnymir2v4p7qg4yhyjezc9vr3b1d3zlib41xafsbnzfp2n3r46u0opzc76u30cfy477uswxtcdrc1n368dl7y7fst98yn1p93sn7fm3ha0lrlxoh9h9a8jpa6mrfgd58m9o0fznkadjabykqxr993cvd7985cwb321y0gh75zogk2qxg5r1cq43n0o4tqb7zcy806leb5121qon4mwdo6vog4ghklyxhl2zgbdg8nv9mptosgu7z16g6qj7o35kf6rna1gks27p6uq24azxz94fsr0zvf3coq9wa21m8gfeedozxml772mm0uz8q8lmj0xvqny3f0sm8qedk67p4yg0lrzqmgwtzhupi813lng1n7cz4s == 
\o\8\h\7\h\a\z\v\v\m\c\d\g\w\o\v\y\b\m\b\l\e\f\o\g\0\3\s\t\a\7\j\e\a\5\3\7\n\b\p\i\a\o\m\n\n\9\t\3\p\u\9\x\l\8\b\u\i\1\8\k\6\6\d\g\b\z\3\4\8\h\o\i\k\y\i\7\m\r\w\c\0\v\o\m\c\v\m\c\a\o\w\e\t\g\b\f\d\l\l\v\f\d\y\v\2\5\h\l\e\q\2\2\i\e\x\c\r\i\u\l\h\z\d\a\q\i\7\z\8\g\f\f\v\c\7\g\j\u\i\d\4\b\m\9\c\k\1\8\e\g\r\i\l\y\c\4\5\q\t\2\x\8\v\q\l\3\2\f\d\9\a\z\d\o\v\h\c\7\2\u\3\5\c\g\r\g\6\t\4\k\d\s\a\4\9\6\9\9\j\x\3\j\v\8\2\s\p\i\e\e\d\v\h\t\j\s\2\v\y\4\z\f\o\p\i\t\7\b\t\c\i\k\q\k\9\e\j\m\w\4\2\2\g\6\l\n\g\r\z\v\t\r\m\2\o\d\2\0\j\m\p\p\1\y\z\7\k\v\5\j\b\6\q\0\j\f\r\b\u\i\d\q\y\0\l\4\x\v\r\m\u\1\t\b\m\2\r\d\6\k\9\m\8\t\b\w\q\2\5\a\h\y\4\g\6\w\6\r\6\6\p\7\j\m\1\o\8\x\c\x\6\y\v\5\r\b\i\i\8\9\4\o\t\v\3\x\g\2\k\n\4\u\o\f\w\p\q\b\h\x\v\5\v\9\0\g\s\x\l\t\u\8\x\k\u\n\4\b\q\m\6\7\9\z\i\j\4\9\x\g\b\j\3\6\w\1\2\x\w\t\f\n\k\o\m\w\g\4\x\u\u\w\i\h\v\f\x\o\q\r\3\x\m\b\g\n\9\w\j\u\5\r\p\l\j\b\p\a\j\i\k\c\d\0\n\5\k\8\8\1\s\l\8\h\d\v\w\k\s\r\0\r\c\j\y\b\n\p\z\1\b\p\c\3\x\x\a\8\0\v\b\g\k\5\r\p\v\h\m\5\s\3\7\9\8\k\4\m\4\x\n\o\1\4\4\0\i\j\h\3\2\7\p\e\c\b\8\1\b\7\8\f\9\w\k\x\m\n\7\6\8\j\0\x\d\3\p\i\c\t\p\j\k\n\r\v\7\m\c\9\1\g\q\g\m\p\x\2\c\i\s\y\u\p\g\c\d\m\1\u\0\x\1\w\k\9\z\j\3\g\1\j\j\d\f\p\w\d\9\p\q\e\y\k\s\y\y\i\j\o\y\g\d\g\7\u\p\t\5\r\4\g\n\x\7\x\m\f\v\i\v\b\v\5\4\e\m\e\6\q\i\6\o\5\4\e\h\p\8\f\4\1\p\t\g\v\u\t\4\r\n\0\u\a\r\9\y\u\1\i\w\u\h\o\v\n\v\s\1\v\d\v\i\7\d\2\j\1\s\i\e\q\z\s\j\9\5\b\i\o\w\r\0\9\q\n\l\h\4\v\t\3\v\k\u\y\5\e\c\m\q\m\t\8\q\i\y\d\f\u\6\t\4\t\l\7\y\y\a\4\p\n\w\r\5\b\p\p\c\d\c\6\u\f\s\g\9\6\w\e\1\p\g\j\7\v\7\2\p\l\6\i\7\2\4\u\o\w\f\2\z\a\7\6\z\t\u\3\b\n\w\0\u\v\n\3\x\2\q\0\f\k\d\d\j\z\c\m\x\x\x\o\h\p\l\c\2\8\i\u\x\y\6\8\e\p\5\t\8\n\6\w\h\9\4\8\w\q\h\g\n\7\g\p\4\7\v\w\k\9\0\f\y\p\w\r\8\x\x\j\8\0\o\f\m\o\p\n\9\t\y\b\0\h\r\6\g\m\1\r\l\s\d\n\c\y\u\c\y\b\z\i\z\5\f\r\5\s\a\s\8\w\c\7\0\u\q\1\i\i\w\7\n\0\q\g\6\d\f\d\8\a\j\l\5\3\3\e\8\x\p\c\4\2\h\r\b\z\e\g\n\6\a\t\u\p\l\t\9\u\o\4\f\p\h\b\2\n\q\p\x\w\j\p\t\u\4\0\w\4\z\l\w\m\o\j\t\t\p\2\n\s\j\h\d\k\c\5\s\x\c\j\1\z\2\g\8\s\i\4\4\g\z\t\3\t\u\m\v\d\u\c\u\h\8\b\7\q\f\t\9\o\a\b\z\i\k\r\d\b\l\t\t\g\2\o\s\7\6\s\3\v\j\1\9\w\z\p\6\5\r\l\v\r\0\i\k\h\a\o\h\b\u\o\p\2\f\p\i\5\q\9\c\4\0\3\7\a\w\t\u\8\h\m\5\8\a\s\o\a\y\c\q\x\6\d\r\x\b\v\e\0\t\8\4\3\8\p\6\w\k\j\t\9\m\7\c\y\2\f\7\m\l\s\i\i\x\n\r\b\3\e\7\0\p\1\c\q\e\i\9\e\7\b\v\8\d\x\4\h\8\r\n\h\a\v\g\5\w\r\z\m\i\x\1\x\d\y\5\f\2\t\g\4\j\m\r\b\c\9\h\t\b\g\k\1\p\t\r\u\i\1\8\f\2\v\n\5\o\u\y\8\g\o\u\i\5\4\p\a\j\r\0\d\u\i\8\p\3\1\m\7\q\g\f\m\5\c\d\w\5\4\d\1\2\b\8\d\x\c\c\n\0\i\u\r\a\x\n\t\y\a\l\h\5\g\g\u\a\s\7\c\k\z\6\k\1\1\d\0\m\j\s\l\k\7\g\z\9\r\v\k\u\5\4\v\k\6\z\l\2\r\0\4\2\z\w\i\x\r\2\6\1\2\6\5\z\v\b\5\d\v\2\6\n\p\l\t\f\9\g\r\q\m\7\t\4\m\0\r\5\1\e\s\1\0\d\x\b\6\7\u\l\7\x\i\t\z\5\t\i\j\d\y\5\1\h\q\t\9\b\b\z\i\i\h\g\4\j\u\9\4\d\6\s\w\d\k\z\a\3\b\z\z\b\3\v\k\8\w\7\8\g\q\b\2\t\2\4\x\j\4\4\n\g\w\v\d\y\y\d\f\e\b\o\v\t\f\v\8\c\v\s\9\q\l\y\g\a\5\w\y\0\f\o\p\f\5\h\t\b\l\e\c\6\5\a\w\a\r\i\c\8\8\c\2\f\8\s\t\z\f\9\k\m\2\x\g\b\2\i\f\w\m\r\b\z\7\l\b\o\w\d\v\o\e\t\q\c\5\8\6\9\7\2\e\9\6\y\k\q\o\b\l\i\b\4\q\l\q\d\z\x\e\s\p\n\j\2\7\i\3\s\7\a\j\v\x\4\x\f\o\m\k\0\3\u\n\h\e\k\v\j\7\n\d\2\i\n\o\b\m\r\g\d\3\n\d\w\j\j\6\b\i\7\2\b\j\2\i\7\a\d\y\0\s\3\d\p\k\e\l\8\4\1\l\7\7\n\0\v\j\q\l\y\o\6\q\7\u\f\f\0\9\g\b\q\g\i\r\5\b\c\t\7\4\s\n\8\8\u\e\4\q\8\u\z\6\5\q\f\u\o\q\b\4\l\8\l\8\i\4\z\c\5\a\q\k\g\8\t\r\m\y\i\9\i\j\h\n\v\g\k\g\9\3\u\w\1\6\h\d\6\v\t\j\x\7\g\k\c\p\5\n\r\t\k\w\y\r\v\o\8\d\e\o\i\n\n\g\u\9\v\l\y\o\2\z\5\v\f\m\1\z\w\i\p\h\6\b\u\k\7\4\2\x\r\4\c\b\v\y\x\7\5\r\q\5\y\f\i\4\t\h\d\y\g\a\6\t\e\y\c\v\3\b\w\1\q\4\o\v\y\p\r\5\q\2\e\h\4\7\x\4\l\n\m\a\g\k\v\m\1\w\t\m\
3\n\7\q\a\9\j\m\z\m\a\g\0\f\x\a\2\e\1\e\2\q\0\8\3\q\s\s\v\i\m\6\n\e\l\o\l\1\5\i\4\2\g\d\5\z\2\d\v\2\0\o\e\l\j\c\6\v\o\2\g\b\r\e\m\0\f\2\l\l\y\e\w\n\m\j\w\b\n\m\f\w\2\m\m\1\j\d\3\2\v\8\8\x\e\o\y\j\b\b\l\i\r\v\3\h\d\a\h\h\r\z\g\v\c\i\t\0\z\v\j\9\e\2\t\j\o\l\s\f\9\5\0\y\u\u\z\x\1\n\7\k\o\l\f\7\4\e\q\j\x\l\f\e\7\q\z\z\2\r\m\g\7\w\g\m\d\2\e\y\o\5\f\6\y\b\j\q\o\1\o\s\0\y\d\l\w\9\r\z\2\q\o\t\v\g\m\g\0\z\o\4\9\8\6\z\u\g\8\u\e\c\z\v\t\o\a\1\w\b\b\v\y\m\q\5\y\i\l\h\d\m\m\h\6\d\2\5\m\k\v\g\0\q\8\f\m\4\u\p\3\a\6\0\4\p\4\n\z\a\w\s\m\v\w\b\v\c\4\6\x\8\s\p\j\z\6\z\3\g\t\j\4\2\0\5\2\p\1\q\a\l\0\r\b\b\r\z\u\n\i\b\3\q\w\6\6\3\q\8\o\t\y\0\b\m\u\1\8\l\x\4\j\e\w\o\q\3\w\z\h\m\e\n\w\x\d\y\e\t\f\7\j\t\v\1\u\b\p\2\u\1\7\d\o\0\8\x\x\n\h\4\w\f\z\d\n\w\6\c\4\1\o\p\e\8\4\n\v\h\r\l\v\k\m\5\e\n\i\3\a\e\5\h\z\g\f\2\j\n\2\4\a\a\y\a\4\2\z\s\9\i\j\w\j\i\1\7\n\b\k\l\9\1\f\l\l\e\9\3\c\n\i\t\m\2\v\s\3\6\q\0\y\1\v\d\t\1\i\n\l\j\5\m\j\i\2\t\o\j\d\w\q\v\a\h\g\7\x\r\o\s\m\r\a\p\8\o\f\t\r\o\u\l\r\3\z\v\2\3\o\s\d\u\t\x\t\q\2\w\1\m\j\4\1\2\8\9\6\7\k\8\g\5\6\3\6\q\k\s\c\2\j\h\i\v\l\x\k\i\j\k\w\k\d\k\d\u\f\0\b\5\7\m\n\j\5\b\8\o\n\b\c\t\n\4\i\w\0\1\2\w\r\t\r\i\v\v\8\c\r\8\s\4\n\u\a\g\a\p\3\9\c\q\l\t\y\3\m\k\n\i\p\7\w\l\b\k\u\n\n\2\t\5\i\2\u\v\p\v\f\a\v\j\o\h\s\c\3\i\1\b\7\y\r\7\l\e\8\d\s\a\8\0\u\2\5\h\5\j\6\k\z\p\7\f\z\a\8\p\y\z\q\p\l\x\q\8\o\5\d\k\o\5\m\s\c\n\m\t\2\2\y\9\b\t\5\g\h\z\o\m\v\s\u\q\p\5\c\u\d\d\4\i\c\2\j\g\3\4\y\6\i\6\s\g\d\g\2\e\g\x\m\2\4\w\v\3\r\i\f\q\o\f\0\q\s\d\3\5\z\w\c\n\4\9\a\d\h\9\6\c\x\b\y\h\o\x\w\2\e\v\j\u\y\z\5\s\f\p\x\0\c\1\s\q\3\v\z\6\k\h\5\v\1\j\x\8\2\c\9\r\7\f\o\h\d\7\u\o\r\i\y\c\q\a\e\x\y\o\j\7\f\h\o\4\d\q\5\x\k\1\o\r\b\y\t\5\9\b\x\4\o\4\z\2\h\y\4\h\s\3\g\a\n\8\e\j\2\g\x\u\i\s\z\a\g\z\j\h\4\5\a\k\7\0\5\a\d\0\m\f\k\h\s\a\i\7\u\g\7\n\j\p\z\a\j\6\l\j\y\u\3\h\f\p\h\q\i\k\7\m\s\x\j\o\4\r\z\i\d\y\n\f\6\p\e\o\p\e\t\p\m\4\5\b\2\p\s\5\q\w\w\x\l\s\7\v\u\u\f\m\c\x\k\5\t\c\q\h\m\c\n\l\i\2\f\a\y\o\h\j\7\y\1\h\n\w\w\a\n\m\c\u\5\1\8\n\g\k\p\p\f\x\h\p\j\3\0\p\o\1\v\g\x\x\0\c\o\a\j\h\i\x\2\l\e\8\a\k\9\v\n\v\v\p\y\f\m\2\l\w\1\u\b\8\h\x\w\1\5\q\q\s\z\8\r\h\n\a\1\y\9\m\a\5\q\z\e\v\3\p\a\5\c\e\w\q\a\8\9\q\6\0\j\9\c\k\h\e\9\0\b\c\g\h\f\g\b\j\2\9\x\w\q\5\n\1\a\7\h\j\x\3\7\c\u\c\q\f\g\p\6\o\3\d\v\j\g\l\k\z\u\j\7\e\n\r\h\y\c\s\h\p\u\e\w\z\7\y\y\d\5\u\z\l\r\c\f\m\5\i\m\z\n\l\q\i\w\p\l\t\t\7\i\6\b\8\c\c\9\h\4\y\e\9\l\1\a\d\f\h\r\5\0\7\f\e\e\p\w\o\7\2\u\m\9\6\1\2\h\l\m\n\k\x\e\c\w\m\5\i\2\3\9\j\5\b\g\d\u\7\i\p\x\a\l\i\a\f\q\i\i\b\g\n\e\m\m\d\z\w\2\u\e\a\u\9\f\d\d\1\t\6\n\f\a\f\4\a\5\i\i\a\k\n\r\t\o\f\k\h\b\1\5\b\2\f\q\x\7\6\k\e\5\t\u\3\r\4\1\c\k\2\q\f\l\c\z\o\d\e\p\v\v\p\2\4\r\4\8\9\j\0\i\n\n\f\q\q\p\m\y\a\z\0\f\q\l\a\k\y\s\e\r\8\q\w\0\r\5\s\0\s\s\5\b\c\m\9\z\n\i\m\1\o\a\1\e\7\o\k\x\w\q\t\u\z\a\9\o\a\1\9\w\8\i\5\w\4\l\8\y\b\w\d\h\h\5\l\u\s\k\r\a\u\n\q\l\g\8\4\0\m\m\w\o\z\n\d\6\d\o\t\t\w\y\y\e\g\a\c\r\r\y\z\2\6\w\2\j\f\c\3\s\c\y\a\j\r\y\o\h\r\o\5\c\x\c\i\e\i\j\e\t\e\2\g\u\t\1\q\h\u\v\0\m\t\e\s\i\l\i\5\e\9\v\s\i\c\p\k\e\g\6\h\p\g\1\q\p\i\2\l\z\e\c\k\m\i\u\2\7\w\2\s\x\r\d\z\m\e\h\b\8\k\9\a\n\x\d\h\0\8\e\a\c\9\5\k\y\1\5\t\5\c\k\r\u\l\v\m\4\1\4\j\t\6\g\u\w\s\m\f\t\8\h\2\a\8\e\z\b\5\7\5\1\e\8\j\2\d\0\q\0\d\p\m\k\8\m\l\w\n\p\z\v\m\6\c\v\j\p\w\0\y\f\a\7\t\z\v\z\n\s\g\0\4\3\9\h\y\y\q\z\o\7\z\6\j\x\e\b\v\r\2\l\7\m\6\g\i\p\v\o\d\c\6\q\u\l\9\a\r\8\j\0\5\w\1\q\9\a\u\1\q\k\0\h\5\h\y\8\h\7\p\4\s\i\m\h\j\v\n\e\6\9\6\6\h\o\3\w\h\s\x\z\4\r\d\o\0\r\v\u\7\e\1\5\c\6\a\w\8\j\z\n\9\b\6\j\i\n\4\e\7\2\n\k\o\2\o\3\z\l\o\g\6\4\k\6\4\z\t\b\g\i\l\c\1\o\n\d\z\2\m\a\o\w\u\i\b\i\w\4\q\o\l\y\b\s\n\h\q\f\g\d\4\4\4\c\x\n\a\4\i\b\l\j\7\5\t\p\6\7\f\j\l\t\c\f\8\m\x\1\f\f\f
\l\x\r\b\j\c\m\w\1\h\7\m\2\f\t\8\n\7\q\q\9\u\g\h\d\2\4\b\8\4\x\z\9\v\0\y\z\z\2\k\o\u\0\v\l\g\n\0\i\0\c\1\c\u\k\z\q\5\k\h\5\z\r\h\9\k\a\y\y\f\9\t\e\9\8\6\6\z\j\e\6\6\s\b\7\c\7\s\f\s\3\w\d\n\8\k\6\b\7\7\j\5\s\c\l\p\v\z\p\f\o\j\p\s\b\y\h\e\6\b\4\i\o\t\k\j\b\x\m\1\0\d\f\1\h\e\c\s\j\3\z\9\5\2\t\h\k\w\4\3\q\t\r\d\d\8\8\x\3\b\z\l\2\d\7\i\8\9\9\c\1\w\c\n\y\m\i\r\2\v\4\p\7\q\g\4\y\h\y\j\e\z\c\9\v\r\3\b\1\d\3\z\l\i\b\4\1\x\a\f\s\b\n\z\f\p\2\n\3\r\4\6\u\0\o\p\z\c\7\6\u\3\0\c\f\y\4\7\7\u\s\w\x\t\c\d\r\c\1\n\3\6\8\d\l\7\y\7\f\s\t\9\8\y\n\1\p\9\3\s\n\7\f\m\3\h\a\0\l\r\l\x\o\h\9\h\9\a\8\j\p\a\6\m\r\f\g\d\5\8\m\9\o\0\f\z\n\k\a\d\j\a\b\y\k\q\x\r\9\9\3\c\v\d\7\9\8\5\c\w\b\3\2\1\y\0\g\h\7\5\z\o\g\k\2\q\x\g\5\r\1\c\q\4\3\n\0\o\4\t\q\b\7\z\c\y\8\0\6\l\e\b\5\1\2\1\q\o\n\4\m\w\d\o\6\v\o\g\4\g\h\k\l\y\x\h\l\2\z\g\b\d\g\8\n\v\9\m\p\t\o\s\g\u\7\z\1\6\g\6\q\j\7\o\3\5\k\f\6\r\n\a\1\g\k\s\2\7\p\6\u\q\2\4\a\z\x\z\9\4\f\s\r\0\z\v\f\3\c\o\q\9\w\a\2\1\m\8\g\f\e\e\d\o\z\x\m\l\7\7\2\m\m\0\u\z\8\q\8\l\m\j\0\x\v\q\n\y\3\f\0\s\m\8\q\e\d\k\6\7\p\4\y\g\0\l\r\z\q\m\g\w\t\z\h\u\p\i\8\1\3\l\n\g\1\n\7\c\z\4\s ]] 00:06:20.307 00:06:20.307 real 0m1.505s 00:06:20.307 user 0m1.064s 00:06:20.307 sys 0m0.630s 00:06:20.307 12:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.307 12:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.565 12:48:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.565 [2024-07-15 12:48:36.432430] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
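The wall of backslashes in the comparison above is an xtrace artifact, not corrupted data: the script compares the data it read back against the expected string with [[ ... == "..." ]], and when the right-hand side of == is quoted, bash treats it as a literal string rather than a glob pattern and renders it in the trace with every character escaped. A small illustration of the same behaviour, with hypothetical variable names and a truncated stand-in for the 4 KiB payload:

set -x
expected='o8h7hazv'        # stands in for the full random string above
actual="$expected"
[[ $actual == "$expected" ]] && echo match
# xtrace prints: [[ o8h7hazv == \o\8\h\7\h\a\z\v ]]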
00:06:20.565 [2024-07-15 12:48:36.433173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62922 ] 00:06:20.565 { 00:06:20.565 "subsystems": [ 00:06:20.565 { 00:06:20.565 "subsystem": "bdev", 00:06:20.565 "config": [ 00:06:20.565 { 00:06:20.565 "params": { 00:06:20.565 "trtype": "pcie", 00:06:20.565 "traddr": "0000:00:10.0", 00:06:20.565 "name": "Nvme0" 00:06:20.565 }, 00:06:20.565 "method": "bdev_nvme_attach_controller" 00:06:20.565 }, 00:06:20.565 { 00:06:20.565 "method": "bdev_wait_for_examine" 00:06:20.565 } 00:06:20.565 ] 00:06:20.565 } 00:06:20.565 ] 00:06:20.565 } 00:06:20.565 [2024-07-15 12:48:36.565573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.832 [2024-07-15 12:48:36.683584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.832 [2024-07-15 12:48:36.738259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.093  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:21.093 00:06:21.093 12:48:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.093 ************************************ 00:06:21.093 END TEST spdk_dd_basic_rw 00:06:21.093 ************************************ 00:06:21.093 00:06:21.093 real 0m19.385s 00:06:21.093 user 0m14.202s 00:06:21.093 sys 0m6.648s 00:06:21.093 12:48:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.093 12:48:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.093 12:48:37 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:21.093 12:48:37 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:21.093 12:48:37 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.093 12:48:37 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.093 12:48:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:21.093 ************************************ 00:06:21.093 START TEST spdk_dd_posix 00:06:21.093 ************************************ 00:06:21.093 12:48:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:21.352 * Looking for test storage... 
00:06:21.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:21.352 * First test run, liburing in use 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.352 ************************************ 00:06:21.352 START TEST dd_flag_append 00:06:21.352 ************************************ 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=lo2igonwiezvn2kllzpzxkzc63nm8hcn 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=pztvbjcm8jl39kd4mpt6rw960c3u47w8 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s lo2igonwiezvn2kllzpzxkzc63nm8hcn 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s pztvbjcm8jl39kd4mpt6rw960c3u47w8 00:06:21.352 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:21.352 [2024-07-15 12:48:37.313762] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
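The dd_flag_append run under way here generates two 32-byte strings, writes each into its own dump file, and then runs spdk_dd with --oflag=append so that dd.dump0's contents are appended to dd.dump1 instead of overwriting it; the check a little further down confirms the destination now holds dump1 followed by dump0. A sketch of the flow, reconstructed from the trace (the printf redirections and the final read are assumptions; the spdk_dd command and the byte strings are as logged):

DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
dump0=lo2igonwiezvn2kllzpzxkzc63nm8hcn   # 32 random bytes from gen_bytes 32
dump1=pztvbjcm8jl39kd4mpt6rw960c3u47w8

printf %s "$dump0" > "$DUMP0"
printf %s "$dump1" > "$DUMP1"

# Append rather than truncate the destination file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if="$DUMP0" --of="$DUMP1" --oflag=append

concatenated=$(<"$DUMP1")
[[ $concatenated == "$dump1$dump0" ]]    # destination = original dump1 + appended dump0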
00:06:21.352 [2024-07-15 12:48:37.313919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62986 ] 00:06:21.610 [2024-07-15 12:48:37.456510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.610 [2024-07-15 12:48:37.586568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.610 [2024-07-15 12:48:37.642083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.869  Copying: 32/32 [B] (average 31 kBps) 00:06:21.869 00:06:21.869 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ pztvbjcm8jl39kd4mpt6rw960c3u47w8lo2igonwiezvn2kllzpzxkzc63nm8hcn == \p\z\t\v\b\j\c\m\8\j\l\3\9\k\d\4\m\p\t\6\r\w\9\6\0\c\3\u\4\7\w\8\l\o\2\i\g\o\n\w\i\e\z\v\n\2\k\l\l\z\p\z\x\k\z\c\6\3\n\m\8\h\c\n ]] 00:06:21.869 00:06:21.869 real 0m0.663s 00:06:21.869 user 0m0.396s 00:06:21.869 sys 0m0.282s 00:06:21.869 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.869 ************************************ 00:06:21.869 END TEST dd_flag_append 00:06:21.869 ************************************ 00:06:21.869 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:22.127 ************************************ 00:06:22.127 START TEST dd_flag_directory 00:06:22.127 ************************************ 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:22.127 12:48:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.127 [2024-07-15 12:48:37.990848] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:22.127 [2024-07-15 12:48:37.990939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63020 ] 00:06:22.127 [2024-07-15 12:48:38.127060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.385 [2024-07-15 12:48:38.243796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.385 [2024-07-15 12:48:38.295642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.385 [2024-07-15 12:48:38.329740] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.385 [2024-07-15 12:48:38.329796] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.385 [2024-07-15 12:48:38.329812] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.385 [2024-07-15 12:48:38.441499] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:22.652 12:48:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.652 [2024-07-15 12:48:38.596984] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:22.652 [2024-07-15 12:48:38.597090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63024 ] 00:06:22.936 [2024-07-15 12:48:38.733516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.936 [2024-07-15 12:48:38.849768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.936 [2024-07-15 12:48:38.901536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.936 [2024-07-15 12:48:38.936644] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.936 [2024-07-15 12:48:38.936698] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.936 [2024-07-15 12:48:38.936713] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.194 [2024-07-15 12:48:39.047176] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.194 00:06:23.194 real 0m1.207s 00:06:23.194 user 0m0.719s 00:06:23.194 sys 0m0.276s 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.194 ************************************ 00:06:23.194 END TEST dd_flag_directory 00:06:23.194 ************************************ 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:23.194 ************************************ 00:06:23.194 START TEST dd_flag_nofollow 00:06:23.194 ************************************ 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:23.194 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:23.195 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.195 
[2024-07-15 12:48:39.252982] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:23.195 [2024-07-15 12:48:39.253079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63058 ] 00:06:23.453 [2024-07-15 12:48:39.384436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.453 [2024-07-15 12:48:39.498553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.711 [2024-07-15 12:48:39.552151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.711 [2024-07-15 12:48:39.587168] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:23.711 [2024-07-15 12:48:39.587232] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:23.711 [2024-07-15 12:48:39.587248] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.711 [2024-07-15 12:48:39.703526] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.970 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:23.971 12:48:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:23.971 [2024-07-15 12:48:39.864313] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:23.971 [2024-07-15 12:48:39.864429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63067 ] 00:06:23.971 [2024-07-15 12:48:39.996986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.229 [2024-07-15 12:48:40.112527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.229 [2024-07-15 12:48:40.166386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.229 [2024-07-15 12:48:40.200996] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:24.229 [2024-07-15 12:48:40.201047] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:24.229 [2024-07-15 12:48:40.201063] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.488 [2024-07-15 12:48:40.316461] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:24.488 12:48:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.488 [2024-07-15 12:48:40.470656] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
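[editorial sketch] The two NOT-wrapped copies above are expected to fail: with the nofollow input or output flag spdk_dd refuses to follow a symlink, and the open fails with "Too many levels of symbolic links", exactly as the errors in the trace show. The run just started drops the flag, so dd.dump0.link is resolved normally and the copy succeeds. A rough stand-alone sketch of that behaviour, with placeholder file names and SPDK_DD taken from this run:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as used in this run
printf '%s' 'payload' > data.bin
ln -fs data.bin data.link
# Opening the link with nofollow is expected to fail, mirroring the
# NOT-wrapped runs in the trace above.
if ! "$SPDK_DD" --if=data.link --iflag=nofollow --of=out.bin; then
    echo 'nofollow refused the symlink as expected'
fi
# Without the flag the link is resolved and the copy goes through.
"$SPDK_DD" --if=data.link --of=out.bin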
00:06:24.488 [2024-07-15 12:48:40.470745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63075 ] 00:06:24.747 [2024-07-15 12:48:40.604853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.747 [2024-07-15 12:48:40.721826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.747 [2024-07-15 12:48:40.774821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.009  Copying: 512/512 [B] (average 500 kBps) 00:06:25.009 00:06:25.010 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ m58wcc6x2dttzasacaxvltf6lijelhmdv6u2qzcn3pit6sh2hv51jaaj144ihgt67ahzxnqng1c8u8cnmar87ejf3ipctmn8eorffuss8h54xbu0wt9wsftm3goji88mcoe6dtld2bbbegv7q0kwd1xr0zutf5891ni6u8sd3o6ivl3brc6odi9b6czcm667pb91vibt2iz1ai6l9852vluov7g89l4l2uxkj5nr7e2pyjf6pg6gm68fin91jepbex7ljvmfdgr6oclto4o1ls5kwrhxx3g188ouhp81kw2vupzw4pqrn2f9rn549ysdz94ps54i01efufsj0v0b3nrnrwwbjeye2f5zdd9d8supxm62hwr4iwd3j9mttflw6u2bb41j91y714x38cx5j192ux1oprzaxewj3y1t8w337v3ve15cd8ti0yd8uh3q7g3ptkhqn3vmd8fnia4t799xv5b5o2sq92ysrkjot1f6be4cfsjgq9ga1cn4f3ss == \m\5\8\w\c\c\6\x\2\d\t\t\z\a\s\a\c\a\x\v\l\t\f\6\l\i\j\e\l\h\m\d\v\6\u\2\q\z\c\n\3\p\i\t\6\s\h\2\h\v\5\1\j\a\a\j\1\4\4\i\h\g\t\6\7\a\h\z\x\n\q\n\g\1\c\8\u\8\c\n\m\a\r\8\7\e\j\f\3\i\p\c\t\m\n\8\e\o\r\f\f\u\s\s\8\h\5\4\x\b\u\0\w\t\9\w\s\f\t\m\3\g\o\j\i\8\8\m\c\o\e\6\d\t\l\d\2\b\b\b\e\g\v\7\q\0\k\w\d\1\x\r\0\z\u\t\f\5\8\9\1\n\i\6\u\8\s\d\3\o\6\i\v\l\3\b\r\c\6\o\d\i\9\b\6\c\z\c\m\6\6\7\p\b\9\1\v\i\b\t\2\i\z\1\a\i\6\l\9\8\5\2\v\l\u\o\v\7\g\8\9\l\4\l\2\u\x\k\j\5\n\r\7\e\2\p\y\j\f\6\p\g\6\g\m\6\8\f\i\n\9\1\j\e\p\b\e\x\7\l\j\v\m\f\d\g\r\6\o\c\l\t\o\4\o\1\l\s\5\k\w\r\h\x\x\3\g\1\8\8\o\u\h\p\8\1\k\w\2\v\u\p\z\w\4\p\q\r\n\2\f\9\r\n\5\4\9\y\s\d\z\9\4\p\s\5\4\i\0\1\e\f\u\f\s\j\0\v\0\b\3\n\r\n\r\w\w\b\j\e\y\e\2\f\5\z\d\d\9\d\8\s\u\p\x\m\6\2\h\w\r\4\i\w\d\3\j\9\m\t\t\f\l\w\6\u\2\b\b\4\1\j\9\1\y\7\1\4\x\3\8\c\x\5\j\1\9\2\u\x\1\o\p\r\z\a\x\e\w\j\3\y\1\t\8\w\3\3\7\v\3\v\e\1\5\c\d\8\t\i\0\y\d\8\u\h\3\q\7\g\3\p\t\k\h\q\n\3\v\m\d\8\f\n\i\a\4\t\7\9\9\x\v\5\b\5\o\2\s\q\9\2\y\s\r\k\j\o\t\1\f\6\b\e\4\c\f\s\j\g\q\9\g\a\1\c\n\4\f\3\s\s ]] 00:06:25.010 00:06:25.010 real 0m1.824s 00:06:25.010 user 0m1.078s 00:06:25.010 sys 0m0.554s 00:06:25.010 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.010 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:25.010 ************************************ 00:06:25.010 END TEST dd_flag_nofollow 00:06:25.010 ************************************ 00:06:25.010 12:48:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:25.010 12:48:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:25.010 12:48:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.010 12:48:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.010 12:48:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:25.268 ************************************ 00:06:25.268 START TEST dd_flag_noatime 00:06:25.268 ************************************ 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:25.268 12:48:41 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721047720 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721047721 00:06:25.268 12:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:26.203 12:48:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.203 [2024-07-15 12:48:42.156103] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:26.203 [2024-07-15 12:48:42.156233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63124 ] 00:06:26.462 [2024-07-15 12:48:42.297266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.462 [2024-07-15 12:48:42.459354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.462 [2024-07-15 12:48:42.519518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.720  Copying: 512/512 [B] (average 500 kBps) 00:06:26.720 00:06:26.720 12:48:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.720 12:48:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721047720 )) 00:06:26.720 12:48:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.720 12:48:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721047721 )) 00:06:26.720 12:48:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.979 [2024-07-15 12:48:42.826706] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
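[editorial sketch] In the noatime test above, stat --printf=%X records the access times of dd.dump0 and dd.dump1 before the copies. The first spdk_dd run reads dd.dump0 with --iflag=noatime, and the (( atime_if == ... )) / (( atime_of == ... )) checks confirm neither timestamp moved; the run just started repeats the copy without the flag, after which the harness checks that the source's access time has advanced (the (( atime_if < ... )) comparison further on). A condensed sketch of that sequence, with placeholder paths and SPDK_DD taken from this run:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as used in this run
atime_before=$(stat --printf=%X data.bin)
sleep 1
# Reading with --iflag=noatime must leave the source's access time alone.
"$SPDK_DD" --if=data.bin --iflag=noatime --of=copy.bin
(( $(stat --printf=%X data.bin) == atime_before )) && echo 'atime unchanged'
# A second copy without the flag is then expected to bump the atime
# (whether it actually does can depend on the mount's atime options).
"$SPDK_DD" --if=data.bin --of=copy.bin
(( atime_before < $(stat --printf=%X data.bin) )) && echo 'atime advanced'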
00:06:26.979 [2024-07-15 12:48:42.826817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63132 ] 00:06:26.979 [2024-07-15 12:48:42.965989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.245 [2024-07-15 12:48:43.074824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.245 [2024-07-15 12:48:43.151417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.505  Copying: 512/512 [B] (average 500 kBps) 00:06:27.505 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721047723 )) 00:06:27.505 00:06:27.505 real 0m2.429s 00:06:27.505 user 0m0.854s 00:06:27.505 sys 0m0.668s 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:27.505 ************************************ 00:06:27.505 END TEST dd_flag_noatime 00:06:27.505 ************************************ 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.505 ************************************ 00:06:27.505 START TEST dd_flags_misc 00:06:27.505 ************************************ 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.505 12:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:27.763 [2024-07-15 12:48:43.617582] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
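[editorial sketch] The dd_flags_misc section above builds a small matrix: every read-side flag in flags_ro=(direct nonblock) is paired with every write-side flag in flags_rw=(direct nonblock sync dsync), and each pairing copies the same 512 generated bytes and verifies the destination matches. The eight EAL start-ups that follow, one per combination, are that loop unrolling. A condensed sketch of the matrix with an explicit content check; cmp is used here for illustration, whereas the harness compares the generated strings directly, and SPDK_DD is the binary path from this run:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as used in this run
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        # Each pass copies the same input with a different flag pairing.
        "$SPDK_DD" --if=in.bin --iflag="$flag_ro" \
                   --of=out.bin --oflag="$flag_rw"
        cmp -s in.bin out.bin || echo "mismatch for $flag_ro/$flag_rw"
    done
done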
00:06:27.763 [2024-07-15 12:48:43.617681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63166 ] 00:06:27.763 [2024-07-15 12:48:43.755734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.070 [2024-07-15 12:48:43.901572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.070 [2024-07-15 12:48:43.973207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.329  Copying: 512/512 [B] (average 500 kBps) 00:06:28.329 00:06:28.329 12:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yptc5bckt6v4ftxch6xi0827h5jhun9cqwuaiskjacbzyu7t18v82w7tc7x6f3rd3fhzr9g77dsxncjfqvyq3qysb6dz95l4hhmbmp78w2vlc32pjtzf6a3cddlift8b7oidcoc7nd3pmv5h9zcsb7jpjx9d6y6ym6n3bkuagiar6by06nwm6pbykmhfgtjoljdux4jmr1iv5emn2f0g88j0bwyehf5zil5w17kuvmdvo0vbzianpe52ydma1ap4cgpu00zsnhmh1g9iihbypub4ijxvp1f9nkoiit6bqbawa5koqfgyc1ne022zh30pgh0zvqbzv26x48c17ec1g5frfaaebxz4b3r5mmr22vud3efyzjlfn8dfa2gsrohw2ss1gflrcxfebnk1xi6pdw6nppd8l4g5yriszp2s191tcr7qdlz3avouerc4kmrgeqpc6nfvu4iypeae12xkrr2y7weu75k3kfst0vnu4y2gzz4n94ns1hq35z8xob57 == \y\p\t\c\5\b\c\k\t\6\v\4\f\t\x\c\h\6\x\i\0\8\2\7\h\5\j\h\u\n\9\c\q\w\u\a\i\s\k\j\a\c\b\z\y\u\7\t\1\8\v\8\2\w\7\t\c\7\x\6\f\3\r\d\3\f\h\z\r\9\g\7\7\d\s\x\n\c\j\f\q\v\y\q\3\q\y\s\b\6\d\z\9\5\l\4\h\h\m\b\m\p\7\8\w\2\v\l\c\3\2\p\j\t\z\f\6\a\3\c\d\d\l\i\f\t\8\b\7\o\i\d\c\o\c\7\n\d\3\p\m\v\5\h\9\z\c\s\b\7\j\p\j\x\9\d\6\y\6\y\m\6\n\3\b\k\u\a\g\i\a\r\6\b\y\0\6\n\w\m\6\p\b\y\k\m\h\f\g\t\j\o\l\j\d\u\x\4\j\m\r\1\i\v\5\e\m\n\2\f\0\g\8\8\j\0\b\w\y\e\h\f\5\z\i\l\5\w\1\7\k\u\v\m\d\v\o\0\v\b\z\i\a\n\p\e\5\2\y\d\m\a\1\a\p\4\c\g\p\u\0\0\z\s\n\h\m\h\1\g\9\i\i\h\b\y\p\u\b\4\i\j\x\v\p\1\f\9\n\k\o\i\i\t\6\b\q\b\a\w\a\5\k\o\q\f\g\y\c\1\n\e\0\2\2\z\h\3\0\p\g\h\0\z\v\q\b\z\v\2\6\x\4\8\c\1\7\e\c\1\g\5\f\r\f\a\a\e\b\x\z\4\b\3\r\5\m\m\r\2\2\v\u\d\3\e\f\y\z\j\l\f\n\8\d\f\a\2\g\s\r\o\h\w\2\s\s\1\g\f\l\r\c\x\f\e\b\n\k\1\x\i\6\p\d\w\6\n\p\p\d\8\l\4\g\5\y\r\i\s\z\p\2\s\1\9\1\t\c\r\7\q\d\l\z\3\a\v\o\u\e\r\c\4\k\m\r\g\e\q\p\c\6\n\f\v\u\4\i\y\p\e\a\e\1\2\x\k\r\r\2\y\7\w\e\u\7\5\k\3\k\f\s\t\0\v\n\u\4\y\2\g\z\z\4\n\9\4\n\s\1\h\q\3\5\z\8\x\o\b\5\7 ]] 00:06:28.329 12:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.329 12:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:28.329 [2024-07-15 12:48:44.368445] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:28.329 [2024-07-15 12:48:44.368544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63181 ] 00:06:28.587 [2024-07-15 12:48:44.507397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.845 [2024-07-15 12:48:44.674943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.845 [2024-07-15 12:48:44.758006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.104  Copying: 512/512 [B] (average 500 kBps) 00:06:29.104 00:06:29.104 12:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yptc5bckt6v4ftxch6xi0827h5jhun9cqwuaiskjacbzyu7t18v82w7tc7x6f3rd3fhzr9g77dsxncjfqvyq3qysb6dz95l4hhmbmp78w2vlc32pjtzf6a3cddlift8b7oidcoc7nd3pmv5h9zcsb7jpjx9d6y6ym6n3bkuagiar6by06nwm6pbykmhfgtjoljdux4jmr1iv5emn2f0g88j0bwyehf5zil5w17kuvmdvo0vbzianpe52ydma1ap4cgpu00zsnhmh1g9iihbypub4ijxvp1f9nkoiit6bqbawa5koqfgyc1ne022zh30pgh0zvqbzv26x48c17ec1g5frfaaebxz4b3r5mmr22vud3efyzjlfn8dfa2gsrohw2ss1gflrcxfebnk1xi6pdw6nppd8l4g5yriszp2s191tcr7qdlz3avouerc4kmrgeqpc6nfvu4iypeae12xkrr2y7weu75k3kfst0vnu4y2gzz4n94ns1hq35z8xob57 == \y\p\t\c\5\b\c\k\t\6\v\4\f\t\x\c\h\6\x\i\0\8\2\7\h\5\j\h\u\n\9\c\q\w\u\a\i\s\k\j\a\c\b\z\y\u\7\t\1\8\v\8\2\w\7\t\c\7\x\6\f\3\r\d\3\f\h\z\r\9\g\7\7\d\s\x\n\c\j\f\q\v\y\q\3\q\y\s\b\6\d\z\9\5\l\4\h\h\m\b\m\p\7\8\w\2\v\l\c\3\2\p\j\t\z\f\6\a\3\c\d\d\l\i\f\t\8\b\7\o\i\d\c\o\c\7\n\d\3\p\m\v\5\h\9\z\c\s\b\7\j\p\j\x\9\d\6\y\6\y\m\6\n\3\b\k\u\a\g\i\a\r\6\b\y\0\6\n\w\m\6\p\b\y\k\m\h\f\g\t\j\o\l\j\d\u\x\4\j\m\r\1\i\v\5\e\m\n\2\f\0\g\8\8\j\0\b\w\y\e\h\f\5\z\i\l\5\w\1\7\k\u\v\m\d\v\o\0\v\b\z\i\a\n\p\e\5\2\y\d\m\a\1\a\p\4\c\g\p\u\0\0\z\s\n\h\m\h\1\g\9\i\i\h\b\y\p\u\b\4\i\j\x\v\p\1\f\9\n\k\o\i\i\t\6\b\q\b\a\w\a\5\k\o\q\f\g\y\c\1\n\e\0\2\2\z\h\3\0\p\g\h\0\z\v\q\b\z\v\2\6\x\4\8\c\1\7\e\c\1\g\5\f\r\f\a\a\e\b\x\z\4\b\3\r\5\m\m\r\2\2\v\u\d\3\e\f\y\z\j\l\f\n\8\d\f\a\2\g\s\r\o\h\w\2\s\s\1\g\f\l\r\c\x\f\e\b\n\k\1\x\i\6\p\d\w\6\n\p\p\d\8\l\4\g\5\y\r\i\s\z\p\2\s\1\9\1\t\c\r\7\q\d\l\z\3\a\v\o\u\e\r\c\4\k\m\r\g\e\q\p\c\6\n\f\v\u\4\i\y\p\e\a\e\1\2\x\k\r\r\2\y\7\w\e\u\7\5\k\3\k\f\s\t\0\v\n\u\4\y\2\g\z\z\4\n\9\4\n\s\1\h\q\3\5\z\8\x\o\b\5\7 ]] 00:06:29.104 12:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.104 12:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:29.104 [2024-07-15 12:48:45.153651] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:29.104 [2024-07-15 12:48:45.153730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63196 ] 00:06:29.363 [2024-07-15 12:48:45.284183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.363 [2024-07-15 12:48:45.392493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.621 [2024-07-15 12:48:45.467801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.883  Copying: 512/512 [B] (average 500 kBps) 00:06:29.883 00:06:29.883 12:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yptc5bckt6v4ftxch6xi0827h5jhun9cqwuaiskjacbzyu7t18v82w7tc7x6f3rd3fhzr9g77dsxncjfqvyq3qysb6dz95l4hhmbmp78w2vlc32pjtzf6a3cddlift8b7oidcoc7nd3pmv5h9zcsb7jpjx9d6y6ym6n3bkuagiar6by06nwm6pbykmhfgtjoljdux4jmr1iv5emn2f0g88j0bwyehf5zil5w17kuvmdvo0vbzianpe52ydma1ap4cgpu00zsnhmh1g9iihbypub4ijxvp1f9nkoiit6bqbawa5koqfgyc1ne022zh30pgh0zvqbzv26x48c17ec1g5frfaaebxz4b3r5mmr22vud3efyzjlfn8dfa2gsrohw2ss1gflrcxfebnk1xi6pdw6nppd8l4g5yriszp2s191tcr7qdlz3avouerc4kmrgeqpc6nfvu4iypeae12xkrr2y7weu75k3kfst0vnu4y2gzz4n94ns1hq35z8xob57 == \y\p\t\c\5\b\c\k\t\6\v\4\f\t\x\c\h\6\x\i\0\8\2\7\h\5\j\h\u\n\9\c\q\w\u\a\i\s\k\j\a\c\b\z\y\u\7\t\1\8\v\8\2\w\7\t\c\7\x\6\f\3\r\d\3\f\h\z\r\9\g\7\7\d\s\x\n\c\j\f\q\v\y\q\3\q\y\s\b\6\d\z\9\5\l\4\h\h\m\b\m\p\7\8\w\2\v\l\c\3\2\p\j\t\z\f\6\a\3\c\d\d\l\i\f\t\8\b\7\o\i\d\c\o\c\7\n\d\3\p\m\v\5\h\9\z\c\s\b\7\j\p\j\x\9\d\6\y\6\y\m\6\n\3\b\k\u\a\g\i\a\r\6\b\y\0\6\n\w\m\6\p\b\y\k\m\h\f\g\t\j\o\l\j\d\u\x\4\j\m\r\1\i\v\5\e\m\n\2\f\0\g\8\8\j\0\b\w\y\e\h\f\5\z\i\l\5\w\1\7\k\u\v\m\d\v\o\0\v\b\z\i\a\n\p\e\5\2\y\d\m\a\1\a\p\4\c\g\p\u\0\0\z\s\n\h\m\h\1\g\9\i\i\h\b\y\p\u\b\4\i\j\x\v\p\1\f\9\n\k\o\i\i\t\6\b\q\b\a\w\a\5\k\o\q\f\g\y\c\1\n\e\0\2\2\z\h\3\0\p\g\h\0\z\v\q\b\z\v\2\6\x\4\8\c\1\7\e\c\1\g\5\f\r\f\a\a\e\b\x\z\4\b\3\r\5\m\m\r\2\2\v\u\d\3\e\f\y\z\j\l\f\n\8\d\f\a\2\g\s\r\o\h\w\2\s\s\1\g\f\l\r\c\x\f\e\b\n\k\1\x\i\6\p\d\w\6\n\p\p\d\8\l\4\g\5\y\r\i\s\z\p\2\s\1\9\1\t\c\r\7\q\d\l\z\3\a\v\o\u\e\r\c\4\k\m\r\g\e\q\p\c\6\n\f\v\u\4\i\y\p\e\a\e\1\2\x\k\r\r\2\y\7\w\e\u\7\5\k\3\k\f\s\t\0\v\n\u\4\y\2\g\z\z\4\n\9\4\n\s\1\h\q\3\5\z\8\x\o\b\5\7 ]] 00:06:29.883 12:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.883 12:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:29.883 [2024-07-15 12:48:45.851103] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:29.883 [2024-07-15 12:48:45.851195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63200 ] 00:06:30.142 [2024-07-15 12:48:45.984103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.142 [2024-07-15 12:48:46.132358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.142 [2024-07-15 12:48:46.193839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.401  Copying: 512/512 [B] (average 250 kBps) 00:06:30.401 00:06:30.707 12:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yptc5bckt6v4ftxch6xi0827h5jhun9cqwuaiskjacbzyu7t18v82w7tc7x6f3rd3fhzr9g77dsxncjfqvyq3qysb6dz95l4hhmbmp78w2vlc32pjtzf6a3cddlift8b7oidcoc7nd3pmv5h9zcsb7jpjx9d6y6ym6n3bkuagiar6by06nwm6pbykmhfgtjoljdux4jmr1iv5emn2f0g88j0bwyehf5zil5w17kuvmdvo0vbzianpe52ydma1ap4cgpu00zsnhmh1g9iihbypub4ijxvp1f9nkoiit6bqbawa5koqfgyc1ne022zh30pgh0zvqbzv26x48c17ec1g5frfaaebxz4b3r5mmr22vud3efyzjlfn8dfa2gsrohw2ss1gflrcxfebnk1xi6pdw6nppd8l4g5yriszp2s191tcr7qdlz3avouerc4kmrgeqpc6nfvu4iypeae12xkrr2y7weu75k3kfst0vnu4y2gzz4n94ns1hq35z8xob57 == \y\p\t\c\5\b\c\k\t\6\v\4\f\t\x\c\h\6\x\i\0\8\2\7\h\5\j\h\u\n\9\c\q\w\u\a\i\s\k\j\a\c\b\z\y\u\7\t\1\8\v\8\2\w\7\t\c\7\x\6\f\3\r\d\3\f\h\z\r\9\g\7\7\d\s\x\n\c\j\f\q\v\y\q\3\q\y\s\b\6\d\z\9\5\l\4\h\h\m\b\m\p\7\8\w\2\v\l\c\3\2\p\j\t\z\f\6\a\3\c\d\d\l\i\f\t\8\b\7\o\i\d\c\o\c\7\n\d\3\p\m\v\5\h\9\z\c\s\b\7\j\p\j\x\9\d\6\y\6\y\m\6\n\3\b\k\u\a\g\i\a\r\6\b\y\0\6\n\w\m\6\p\b\y\k\m\h\f\g\t\j\o\l\j\d\u\x\4\j\m\r\1\i\v\5\e\m\n\2\f\0\g\8\8\j\0\b\w\y\e\h\f\5\z\i\l\5\w\1\7\k\u\v\m\d\v\o\0\v\b\z\i\a\n\p\e\5\2\y\d\m\a\1\a\p\4\c\g\p\u\0\0\z\s\n\h\m\h\1\g\9\i\i\h\b\y\p\u\b\4\i\j\x\v\p\1\f\9\n\k\o\i\i\t\6\b\q\b\a\w\a\5\k\o\q\f\g\y\c\1\n\e\0\2\2\z\h\3\0\p\g\h\0\z\v\q\b\z\v\2\6\x\4\8\c\1\7\e\c\1\g\5\f\r\f\a\a\e\b\x\z\4\b\3\r\5\m\m\r\2\2\v\u\d\3\e\f\y\z\j\l\f\n\8\d\f\a\2\g\s\r\o\h\w\2\s\s\1\g\f\l\r\c\x\f\e\b\n\k\1\x\i\6\p\d\w\6\n\p\p\d\8\l\4\g\5\y\r\i\s\z\p\2\s\1\9\1\t\c\r\7\q\d\l\z\3\a\v\o\u\e\r\c\4\k\m\r\g\e\q\p\c\6\n\f\v\u\4\i\y\p\e\a\e\1\2\x\k\r\r\2\y\7\w\e\u\7\5\k\3\k\f\s\t\0\v\n\u\4\y\2\g\z\z\4\n\9\4\n\s\1\h\q\3\5\z\8\x\o\b\5\7 ]] 00:06:30.707 12:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:30.707 12:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:30.707 12:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:30.707 12:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:30.707 12:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.707 12:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:30.707 [2024-07-15 12:48:46.520340] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:30.707 [2024-07-15 12:48:46.520441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63215 ] 00:06:30.707 [2024-07-15 12:48:46.657125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.965 [2024-07-15 12:48:46.767352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.965 [2024-07-15 12:48:46.825385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.223  Copying: 512/512 [B] (average 500 kBps) 00:06:31.223 00:06:31.223 12:48:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5hmoamjsonsqyhjt7rgszvj85oppmvbwmdfvgyxichiqp2fv3woyloh9ix2bun36o5hwvo66ha74707ts23x79i0hx3vd0d2384ywqsc8oturo8ullgkevbiqsjsxgret72kkht4t28je03xnllkvz31umahz8vjgg9mic9c5apwzyqdfecs2vf6fw16getjk3rw60wo2j0nlxs0rmqxcuml6fepuek42244few9q8r20eklb1589c6pl80dnhzqawxw3qep0y29sef1x7pnch8rtbhjcehhfyymjbcnlw9gwhd24kzjj442jo5m0gqs2gbrabkg0oh3jlvlempvzlayyokpsmwvlw8thgcfi5s6e4b0wavzwjzixzfcj1vtboix6etqlcewatn2cyq2sh96neu5iu7f32yvp3rkof6xwehs68ufrkv8iukyx2zuzt4bkd5coknk4ess3wk8gq6lqfz92rav2ril3yshz1h2qohjdl80nf4zja09hwwl == \5\h\m\o\a\m\j\s\o\n\s\q\y\h\j\t\7\r\g\s\z\v\j\8\5\o\p\p\m\v\b\w\m\d\f\v\g\y\x\i\c\h\i\q\p\2\f\v\3\w\o\y\l\o\h\9\i\x\2\b\u\n\3\6\o\5\h\w\v\o\6\6\h\a\7\4\7\0\7\t\s\2\3\x\7\9\i\0\h\x\3\v\d\0\d\2\3\8\4\y\w\q\s\c\8\o\t\u\r\o\8\u\l\l\g\k\e\v\b\i\q\s\j\s\x\g\r\e\t\7\2\k\k\h\t\4\t\2\8\j\e\0\3\x\n\l\l\k\v\z\3\1\u\m\a\h\z\8\v\j\g\g\9\m\i\c\9\c\5\a\p\w\z\y\q\d\f\e\c\s\2\v\f\6\f\w\1\6\g\e\t\j\k\3\r\w\6\0\w\o\2\j\0\n\l\x\s\0\r\m\q\x\c\u\m\l\6\f\e\p\u\e\k\4\2\2\4\4\f\e\w\9\q\8\r\2\0\e\k\l\b\1\5\8\9\c\6\p\l\8\0\d\n\h\z\q\a\w\x\w\3\q\e\p\0\y\2\9\s\e\f\1\x\7\p\n\c\h\8\r\t\b\h\j\c\e\h\h\f\y\y\m\j\b\c\n\l\w\9\g\w\h\d\2\4\k\z\j\j\4\4\2\j\o\5\m\0\g\q\s\2\g\b\r\a\b\k\g\0\o\h\3\j\l\v\l\e\m\p\v\z\l\a\y\y\o\k\p\s\m\w\v\l\w\8\t\h\g\c\f\i\5\s\6\e\4\b\0\w\a\v\z\w\j\z\i\x\z\f\c\j\1\v\t\b\o\i\x\6\e\t\q\l\c\e\w\a\t\n\2\c\y\q\2\s\h\9\6\n\e\u\5\i\u\7\f\3\2\y\v\p\3\r\k\o\f\6\x\w\e\h\s\6\8\u\f\r\k\v\8\i\u\k\y\x\2\z\u\z\t\4\b\k\d\5\c\o\k\n\k\4\e\s\s\3\w\k\8\g\q\6\l\q\f\z\9\2\r\a\v\2\r\i\l\3\y\s\h\z\1\h\2\q\o\h\j\d\l\8\0\n\f\4\z\j\a\0\9\h\w\w\l ]] 00:06:31.223 12:48:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.223 12:48:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:31.223 [2024-07-15 12:48:47.111287] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:31.223 [2024-07-15 12:48:47.111423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63219 ] 00:06:31.223 [2024-07-15 12:48:47.249686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.481 [2024-07-15 12:48:47.332761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.481 [2024-07-15 12:48:47.387657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.738  Copying: 512/512 [B] (average 500 kBps) 00:06:31.738 00:06:31.738 12:48:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5hmoamjsonsqyhjt7rgszvj85oppmvbwmdfvgyxichiqp2fv3woyloh9ix2bun36o5hwvo66ha74707ts23x79i0hx3vd0d2384ywqsc8oturo8ullgkevbiqsjsxgret72kkht4t28je03xnllkvz31umahz8vjgg9mic9c5apwzyqdfecs2vf6fw16getjk3rw60wo2j0nlxs0rmqxcuml6fepuek42244few9q8r20eklb1589c6pl80dnhzqawxw3qep0y29sef1x7pnch8rtbhjcehhfyymjbcnlw9gwhd24kzjj442jo5m0gqs2gbrabkg0oh3jlvlempvzlayyokpsmwvlw8thgcfi5s6e4b0wavzwjzixzfcj1vtboix6etqlcewatn2cyq2sh96neu5iu7f32yvp3rkof6xwehs68ufrkv8iukyx2zuzt4bkd5coknk4ess3wk8gq6lqfz92rav2ril3yshz1h2qohjdl80nf4zja09hwwl == \5\h\m\o\a\m\j\s\o\n\s\q\y\h\j\t\7\r\g\s\z\v\j\8\5\o\p\p\m\v\b\w\m\d\f\v\g\y\x\i\c\h\i\q\p\2\f\v\3\w\o\y\l\o\h\9\i\x\2\b\u\n\3\6\o\5\h\w\v\o\6\6\h\a\7\4\7\0\7\t\s\2\3\x\7\9\i\0\h\x\3\v\d\0\d\2\3\8\4\y\w\q\s\c\8\o\t\u\r\o\8\u\l\l\g\k\e\v\b\i\q\s\j\s\x\g\r\e\t\7\2\k\k\h\t\4\t\2\8\j\e\0\3\x\n\l\l\k\v\z\3\1\u\m\a\h\z\8\v\j\g\g\9\m\i\c\9\c\5\a\p\w\z\y\q\d\f\e\c\s\2\v\f\6\f\w\1\6\g\e\t\j\k\3\r\w\6\0\w\o\2\j\0\n\l\x\s\0\r\m\q\x\c\u\m\l\6\f\e\p\u\e\k\4\2\2\4\4\f\e\w\9\q\8\r\2\0\e\k\l\b\1\5\8\9\c\6\p\l\8\0\d\n\h\z\q\a\w\x\w\3\q\e\p\0\y\2\9\s\e\f\1\x\7\p\n\c\h\8\r\t\b\h\j\c\e\h\h\f\y\y\m\j\b\c\n\l\w\9\g\w\h\d\2\4\k\z\j\j\4\4\2\j\o\5\m\0\g\q\s\2\g\b\r\a\b\k\g\0\o\h\3\j\l\v\l\e\m\p\v\z\l\a\y\y\o\k\p\s\m\w\v\l\w\8\t\h\g\c\f\i\5\s\6\e\4\b\0\w\a\v\z\w\j\z\i\x\z\f\c\j\1\v\t\b\o\i\x\6\e\t\q\l\c\e\w\a\t\n\2\c\y\q\2\s\h\9\6\n\e\u\5\i\u\7\f\3\2\y\v\p\3\r\k\o\f\6\x\w\e\h\s\6\8\u\f\r\k\v\8\i\u\k\y\x\2\z\u\z\t\4\b\k\d\5\c\o\k\n\k\4\e\s\s\3\w\k\8\g\q\6\l\q\f\z\9\2\r\a\v\2\r\i\l\3\y\s\h\z\1\h\2\q\o\h\j\d\l\8\0\n\f\4\z\j\a\0\9\h\w\w\l ]] 00:06:31.738 12:48:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.738 12:48:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:31.738 [2024-07-15 12:48:47.688428] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:31.738 [2024-07-15 12:48:47.688557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63234 ] 00:06:31.998 [2024-07-15 12:48:47.823453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.998 [2024-07-15 12:48:47.931161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.998 [2024-07-15 12:48:47.987586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.260  Copying: 512/512 [B] (average 500 kBps) 00:06:32.260 00:06:32.260 12:48:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5hmoamjsonsqyhjt7rgszvj85oppmvbwmdfvgyxichiqp2fv3woyloh9ix2bun36o5hwvo66ha74707ts23x79i0hx3vd0d2384ywqsc8oturo8ullgkevbiqsjsxgret72kkht4t28je03xnllkvz31umahz8vjgg9mic9c5apwzyqdfecs2vf6fw16getjk3rw60wo2j0nlxs0rmqxcuml6fepuek42244few9q8r20eklb1589c6pl80dnhzqawxw3qep0y29sef1x7pnch8rtbhjcehhfyymjbcnlw9gwhd24kzjj442jo5m0gqs2gbrabkg0oh3jlvlempvzlayyokpsmwvlw8thgcfi5s6e4b0wavzwjzixzfcj1vtboix6etqlcewatn2cyq2sh96neu5iu7f32yvp3rkof6xwehs68ufrkv8iukyx2zuzt4bkd5coknk4ess3wk8gq6lqfz92rav2ril3yshz1h2qohjdl80nf4zja09hwwl == \5\h\m\o\a\m\j\s\o\n\s\q\y\h\j\t\7\r\g\s\z\v\j\8\5\o\p\p\m\v\b\w\m\d\f\v\g\y\x\i\c\h\i\q\p\2\f\v\3\w\o\y\l\o\h\9\i\x\2\b\u\n\3\6\o\5\h\w\v\o\6\6\h\a\7\4\7\0\7\t\s\2\3\x\7\9\i\0\h\x\3\v\d\0\d\2\3\8\4\y\w\q\s\c\8\o\t\u\r\o\8\u\l\l\g\k\e\v\b\i\q\s\j\s\x\g\r\e\t\7\2\k\k\h\t\4\t\2\8\j\e\0\3\x\n\l\l\k\v\z\3\1\u\m\a\h\z\8\v\j\g\g\9\m\i\c\9\c\5\a\p\w\z\y\q\d\f\e\c\s\2\v\f\6\f\w\1\6\g\e\t\j\k\3\r\w\6\0\w\o\2\j\0\n\l\x\s\0\r\m\q\x\c\u\m\l\6\f\e\p\u\e\k\4\2\2\4\4\f\e\w\9\q\8\r\2\0\e\k\l\b\1\5\8\9\c\6\p\l\8\0\d\n\h\z\q\a\w\x\w\3\q\e\p\0\y\2\9\s\e\f\1\x\7\p\n\c\h\8\r\t\b\h\j\c\e\h\h\f\y\y\m\j\b\c\n\l\w\9\g\w\h\d\2\4\k\z\j\j\4\4\2\j\o\5\m\0\g\q\s\2\g\b\r\a\b\k\g\0\o\h\3\j\l\v\l\e\m\p\v\z\l\a\y\y\o\k\p\s\m\w\v\l\w\8\t\h\g\c\f\i\5\s\6\e\4\b\0\w\a\v\z\w\j\z\i\x\z\f\c\j\1\v\t\b\o\i\x\6\e\t\q\l\c\e\w\a\t\n\2\c\y\q\2\s\h\9\6\n\e\u\5\i\u\7\f\3\2\y\v\p\3\r\k\o\f\6\x\w\e\h\s\6\8\u\f\r\k\v\8\i\u\k\y\x\2\z\u\z\t\4\b\k\d\5\c\o\k\n\k\4\e\s\s\3\w\k\8\g\q\6\l\q\f\z\9\2\r\a\v\2\r\i\l\3\y\s\h\z\1\h\2\q\o\h\j\d\l\8\0\n\f\4\z\j\a\0\9\h\w\w\l ]] 00:06:32.260 12:48:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.260 12:48:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:32.260 [2024-07-15 12:48:48.274935] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:32.260 [2024-07-15 12:48:48.275042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63248 ] 00:06:32.525 [2024-07-15 12:48:48.404996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.525 [2024-07-15 12:48:48.507190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.525 [2024-07-15 12:48:48.563497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.791  Copying: 512/512 [B] (average 500 kBps) 00:06:32.791 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5hmoamjsonsqyhjt7rgszvj85oppmvbwmdfvgyxichiqp2fv3woyloh9ix2bun36o5hwvo66ha74707ts23x79i0hx3vd0d2384ywqsc8oturo8ullgkevbiqsjsxgret72kkht4t28je03xnllkvz31umahz8vjgg9mic9c5apwzyqdfecs2vf6fw16getjk3rw60wo2j0nlxs0rmqxcuml6fepuek42244few9q8r20eklb1589c6pl80dnhzqawxw3qep0y29sef1x7pnch8rtbhjcehhfyymjbcnlw9gwhd24kzjj442jo5m0gqs2gbrabkg0oh3jlvlempvzlayyokpsmwvlw8thgcfi5s6e4b0wavzwjzixzfcj1vtboix6etqlcewatn2cyq2sh96neu5iu7f32yvp3rkof6xwehs68ufrkv8iukyx2zuzt4bkd5coknk4ess3wk8gq6lqfz92rav2ril3yshz1h2qohjdl80nf4zja09hwwl == \5\h\m\o\a\m\j\s\o\n\s\q\y\h\j\t\7\r\g\s\z\v\j\8\5\o\p\p\m\v\b\w\m\d\f\v\g\y\x\i\c\h\i\q\p\2\f\v\3\w\o\y\l\o\h\9\i\x\2\b\u\n\3\6\o\5\h\w\v\o\6\6\h\a\7\4\7\0\7\t\s\2\3\x\7\9\i\0\h\x\3\v\d\0\d\2\3\8\4\y\w\q\s\c\8\o\t\u\r\o\8\u\l\l\g\k\e\v\b\i\q\s\j\s\x\g\r\e\t\7\2\k\k\h\t\4\t\2\8\j\e\0\3\x\n\l\l\k\v\z\3\1\u\m\a\h\z\8\v\j\g\g\9\m\i\c\9\c\5\a\p\w\z\y\q\d\f\e\c\s\2\v\f\6\f\w\1\6\g\e\t\j\k\3\r\w\6\0\w\o\2\j\0\n\l\x\s\0\r\m\q\x\c\u\m\l\6\f\e\p\u\e\k\4\2\2\4\4\f\e\w\9\q\8\r\2\0\e\k\l\b\1\5\8\9\c\6\p\l\8\0\d\n\h\z\q\a\w\x\w\3\q\e\p\0\y\2\9\s\e\f\1\x\7\p\n\c\h\8\r\t\b\h\j\c\e\h\h\f\y\y\m\j\b\c\n\l\w\9\g\w\h\d\2\4\k\z\j\j\4\4\2\j\o\5\m\0\g\q\s\2\g\b\r\a\b\k\g\0\o\h\3\j\l\v\l\e\m\p\v\z\l\a\y\y\o\k\p\s\m\w\v\l\w\8\t\h\g\c\f\i\5\s\6\e\4\b\0\w\a\v\z\w\j\z\i\x\z\f\c\j\1\v\t\b\o\i\x\6\e\t\q\l\c\e\w\a\t\n\2\c\y\q\2\s\h\9\6\n\e\u\5\i\u\7\f\3\2\y\v\p\3\r\k\o\f\6\x\w\e\h\s\6\8\u\f\r\k\v\8\i\u\k\y\x\2\z\u\z\t\4\b\k\d\5\c\o\k\n\k\4\e\s\s\3\w\k\8\g\q\6\l\q\f\z\9\2\r\a\v\2\r\i\l\3\y\s\h\z\1\h\2\q\o\h\j\d\l\8\0\n\f\4\z\j\a\0\9\h\w\w\l ]] 00:06:32.791 00:06:32.791 real 0m5.249s 00:06:32.791 user 0m3.051s 00:06:32.791 sys 0m2.501s 00:06:32.791 ************************************ 00:06:32.791 END TEST dd_flags_misc 00:06:32.791 ************************************ 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:32.791 * Second test run, disabling liburing, forcing AIO 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:32.791 12:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:33.057 ************************************ 00:06:33.057 START TEST dd_flag_append_forced_aio 00:06:33.057 ************************************ 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=nvfm9ozgdacbx1b83hno5gu6prfakaoo 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.057 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=yzqw1vf2q4ia64euci1daizeits2scr2 00:06:33.058 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s nvfm9ozgdacbx1b83hno5gu6prfakaoo 00:06:33.058 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s yzqw1vf2q4ia64euci1daizeits2scr2 00:06:33.058 12:48:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:33.058 [2024-07-15 12:48:48.918758] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
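[editorial sketch] At this point the harness has switched to its second pass: DD_APP+=("--aio") prepends --aio to every remaining spdk_dd invocation, so the append, directory, and nofollow tests repeat with liburing disabled and AIO forced, as the "Second test run" banner above announces. A small sketch of that invocation pattern, with placeholder paths and SPDK_DD taken from this run:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as used in this run
DD_APP=("$SPDK_DD")
DD_APP+=("--aio")          # every later test now runs spdk_dd with --aio
"${DD_APP[@]}" --if=src.bin --of=dst.bin --oflag=append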
00:06:33.058 [2024-07-15 12:48:48.918861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63272 ] 00:06:33.058 [2024-07-15 12:48:49.057629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.325 [2024-07-15 12:48:49.157001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.325 [2024-07-15 12:48:49.212228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.595  Copying: 32/32 [B] (average 31 kBps) 00:06:33.595 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ yzqw1vf2q4ia64euci1daizeits2scr2nvfm9ozgdacbx1b83hno5gu6prfakaoo == \y\z\q\w\1\v\f\2\q\4\i\a\6\4\e\u\c\i\1\d\a\i\z\e\i\t\s\2\s\c\r\2\n\v\f\m\9\o\z\g\d\a\c\b\x\1\b\8\3\h\n\o\5\g\u\6\p\r\f\a\k\a\o\o ]] 00:06:33.595 00:06:33.595 real 0m0.641s 00:06:33.595 user 0m0.372s 00:06:33.595 sys 0m0.145s 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.595 ************************************ 00:06:33.595 END TEST dd_flag_append_forced_aio 00:06:33.595 ************************************ 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:33.595 ************************************ 00:06:33.595 START TEST dd_flag_directory_forced_aio 00:06:33.595 ************************************ 00:06:33.595 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.596 12:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.596 [2024-07-15 12:48:49.596237] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:33.596 [2024-07-15 12:48:49.596338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63304 ] 00:06:33.868 [2024-07-15 12:48:49.725449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.868 [2024-07-15 12:48:49.825040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.868 [2024-07-15 12:48:49.879885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.868 [2024-07-15 12:48:49.910409] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.868 [2024-07-15 12:48:49.910488] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.868 [2024-07-15 12:48:49.910502] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.129 [2024-07-15 12:48:50.018971] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.129 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.129 [2024-07-15 12:48:50.157575] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:34.129 [2024-07-15 12:48:50.157682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63308 ] 00:06:34.386 [2024-07-15 12:48:50.288193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.386 [2024-07-15 12:48:50.399183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.643 [2024-07-15 12:48:50.455173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.643 [2024-07-15 12:48:50.488543] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.643 [2024-07-15 12:48:50.488610] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.643 [2024-07-15 12:48:50.488624] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.643 [2024-07-15 12:48:50.603790] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.643 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:34.643 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.643 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:34.643 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.643 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:34.643 
12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.643 00:06:34.643 real 0m1.151s 00:06:34.643 user 0m0.657s 00:06:34.643 sys 0m0.286s 00:06:34.643 ************************************ 00:06:34.643 END TEST dd_flag_directory_forced_aio 00:06:34.643 ************************************ 00:06:34.643 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.643 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.901 ************************************ 00:06:34.901 START TEST dd_flag_nofollow_forced_aio 00:06:34.901 ************************************ 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.901 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.902 12:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.902 [2024-07-15 12:48:50.820629] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:34.902 [2024-07-15 12:48:50.821311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63342 ] 00:06:35.160 [2024-07-15 12:48:50.966356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.160 [2024-07-15 12:48:51.048837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.160 [2024-07-15 12:48:51.105527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.160 [2024-07-15 12:48:51.136882] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:35.160 [2024-07-15 12:48:51.136946] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:35.160 [2024-07-15 12:48:51.136962] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.418 [2024-07-15 12:48:51.252048] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
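Editorial sketch for the dd_flag_directory_forced_aio run that completed above: the test asserts that spdk_dd refuses to open a regular file when the directory open flag is requested, in both the read and the write direction. This is a minimal stand-alone illustration, not the actual posix.sh test; the spdk_dd path and the --iflag/--oflag spellings come from the log, while the scratch file name is hypothetical.

# Sketch: directory-flag negative test (expects "Not a directory" failures).
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f=/tmp/dd.scratch0                                  # hypothetical scratch file
dd if=/dev/urandom of="$f" bs=512 count=1 status=none

if "$DD" --aio --if="$f" --iflag=directory --of="$f"; then
    echo "unexpected success with --iflag=directory" >&2; exit 1
fi
if "$DD" --aio --if="$f" --of="$f" --oflag=directory; then
    echo "unexpected success with --oflag=directory" >&2; exit 1
fi
echo "directory-flag rejection verified"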
00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.418 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.418 [2024-07-15 12:48:51.406469] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:35.418 [2024-07-15 12:48:51.406558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63352 ] 00:06:35.676 [2024-07-15 12:48:51.546798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.676 [2024-07-15 12:48:51.650635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.676 [2024-07-15 12:48:51.706272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.935 [2024-07-15 12:48:51.741712] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.935 [2024-07-15 12:48:51.741763] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.935 [2024-07-15 12:48:51.741779] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.935 [2024-07-15 12:48:51.855583] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:35.935 12:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.193 [2024-07-15 12:48:52.013189] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:36.193 [2024-07-15 12:48:52.013312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63359 ] 00:06:36.193 [2024-07-15 12:48:52.144015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.193 [2024-07-15 12:48:52.244269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.452 [2024-07-15 12:48:52.298040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.710  Copying: 512/512 [B] (average 500 kBps) 00:06:36.710 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 2nag1xoilmfhmer6q3q5rotwppcbm5kadxb84rt333gcf2m7yq9jiwjngkd4takuxjrd8jda5coh7zndmi9hois7l22ouo9c11lxqxb4z7qt4roiexid4546ps7tjmj1p7o8jozpgj7ri7b7aecz5c4n5ra5jsikni93rtxul01ikux9di353m72z5pnjpevqf0yrgxa93h7rayatp4q6y8zt64lj00ar7hrg5o1ackepv3w2dagp02a00q40elmkx6qwbv60e4sqi43rvuu1zr7ea0d11maqwdh8psvdvrgplmkn8mbfumta92azfnzu8mkf23mscd7vzne4x4t88h254u0hndai98orfmn5gabdlym2pfir3d5f0dizuvntfeqbn8208o0otvi01a7qecofx78ho9ahdqqzwz0vtomzpq21c627yecj27qj0gs2q5e21snxq3jvhve69awkli333u3lbftrku7pmsryc6gp45k4n8hzohx0f2sovk3 == \2\n\a\g\1\x\o\i\l\m\f\h\m\e\r\6\q\3\q\5\r\o\t\w\p\p\c\b\m\5\k\a\d\x\b\8\4\r\t\3\3\3\g\c\f\2\m\7\y\q\9\j\i\w\j\n\g\k\d\4\t\a\k\u\x\j\r\d\8\j\d\a\5\c\o\h\7\z\n\d\m\i\9\h\o\i\s\7\l\2\2\o\u\o\9\c\1\1\l\x\q\x\b\4\z\7\q\t\4\r\o\i\e\x\i\d\4\5\4\6\p\s\7\t\j\m\j\1\p\7\o\8\j\o\z\p\g\j\7\r\i\7\b\7\a\e\c\z\5\c\4\n\5\r\a\5\j\s\i\k\n\i\9\3\r\t\x\u\l\0\1\i\k\u\x\9\d\i\3\5\3\m\7\2\z\5\p\n\j\p\e\v\q\f\0\y\r\g\x\a\9\3\h\7\r\a\y\a\t\p\4\q\6\y\8\z\t\6\4\l\j\0\0\a\r\7\h\r\g\5\o\1\a\c\k\e\p\v\3\w\2\d\a\g\p\0\2\a\0\0\q\4\0\e\l\m\k\x\6\q\w\b\v\6\0\e\4\s\q\i\4\3\r\v\u\u\1\z\r\7\e\a\0\d\1\1\m\a\q\w\d\h\8\p\s\v\d\v\r\g\p\l\m\k\n\8\m\b\f\u\m\t\a\9\2\a\z\f\n\z\u\8\m\k\f\2\3\m\s\c\d\7\v\z\n\e\4\x\4\t\8\8\h\2\5\4\u\0\h\n\d\a\i\9\8\o\r\f\m\n\5\g\a\b\d\l\y\m\2\p\f\i\r\3\d\5\f\0\d\i\z\u\v\n\t\f\e\q\b\n\8\2\0\8\o\0\o\t\v\i\0\1\a\7\q\e\c\o\f\x\7\8\h\o\9\a\h\d\q\q\z\w\z\0\v\t\o\m\z\p\q\2\1\c\6\2\7\y\e\c\j\2\7\q\j\0\g\s\2\q\5\e\2\1\s\n\x\q\3\j\v\h\v\e\6\9\a\w\k\l\i\3\3\3\u\3\l\b\f\t\r\k\u\7\p\m\s\r\y\c\6\g\p\4\5\k\4\n\8\h\z\o\h\x\0\f\2\s\o\v\k\3 ]] 00:06:36.710 00:06:36.710 real 0m1.811s 00:06:36.710 user 0m1.027s 00:06:36.710 sys 0m0.449s 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.710 ************************************ 00:06:36.710 END TEST dd_flag_nofollow_forced_aio 
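Editorial sketch for the dd_flag_nofollow_forced_aio run above: spdk_dd is driven through symbolic links, and with nofollow requested the open must fail with "Too many levels of symbolic links", while a plain copy through the link succeeds (512 bytes copied, as in the log). A rough equivalent, again assuming the logged spdk_dd path and using hypothetical scratch names:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=/tmp/dd.scratch0; dst=/tmp/dd.scratch1          # hypothetical scratch names
dd if=/dev/urandom of="$src" bs=512 count=1 status=none
ln -fs "$src" "$src.link"
ln -fs "$dst" "$dst.link"

# With nofollow, opening through a symlink must fail (ELOOP).
! "$DD" --aio --if="$src.link" --iflag=nofollow --of="$dst"
! "$DD" --aio --if="$src" --of="$dst.link" --oflag=nofollow

# Without nofollow the links are dereferenced and the 512-byte copy succeeds.
"$DD" --aio --if="$src.link" --of="$dst"
cmp -s "$src" "$dst" && echo "nofollow semantics verified"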
00:06:36.710 ************************************ 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:36.710 ************************************ 00:06:36.710 START TEST dd_flag_noatime_forced_aio 00:06:36.710 ************************************ 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721047732 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721047732 00:06:36.710 12:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:37.645 12:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.645 [2024-07-15 12:48:53.682815] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
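Editorial sketch of the dd_flag_noatime_forced_aio check whose setup appears just above: the test records each file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime and asserts the source atime is unchanged, then copies again without the flag and asserts the atime has advanced (the (( atime_if == ... )) / (( atime_if < ... )) comparisons that follow in the log). A condensed version with hypothetical file names, under the assumption that the filesystem updates atime on read as it did in the logged run:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=/tmp/dd.scratch0; dst=/tmp/dd.scratch1          # hypothetical scratch names
atime_before=$(stat --printf=%X "$src")
sleep 1

# Copy with O_NOATIME: the source access time must not move.
"$DD" --aio --if="$src" --iflag=noatime --of="$dst"
(( $(stat --printf=%X "$src") == atime_before )) || exit 1

# Copy again without noatime: the access time is now expected to advance
# (this relies on the mount updating atime on read, as in the logged run).
"$DD" --aio --if="$src" --of="$dst"
(( atime_before < $(stat --printf=%X "$src") )) || exit 1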
00:06:37.645 [2024-07-15 12:48:53.682904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63405 ] 00:06:37.964 [2024-07-15 12:48:53.813464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.964 [2024-07-15 12:48:53.907363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.964 [2024-07-15 12:48:53.962271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.221  Copying: 512/512 [B] (average 500 kBps) 00:06:38.221 00:06:38.221 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.221 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721047732 )) 00:06:38.221 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.221 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721047732 )) 00:06:38.221 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.480 [2024-07-15 12:48:54.294536] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:38.480 [2024-07-15 12:48:54.294633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63417 ] 00:06:38.480 [2024-07-15 12:48:54.430605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.480 [2024-07-15 12:48:54.529714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.739 [2024-07-15 12:48:54.589468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.998  Copying: 512/512 [B] (average 500 kBps) 00:06:38.998 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721047734 )) 00:06:38.998 00:06:38.998 real 0m2.246s 00:06:38.998 user 0m0.693s 00:06:38.998 sys 0m0.310s 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.998 ************************************ 00:06:38.998 END TEST dd_flag_noatime_forced_aio 00:06:38.998 ************************************ 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.998 12:48:54 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:38.998 ************************************ 00:06:38.998 START TEST dd_flags_misc_forced_aio 00:06:38.998 ************************************ 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.998 12:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:38.998 [2024-07-15 12:48:54.969738] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:38.998 [2024-07-15 12:48:54.969846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63443 ] 00:06:39.257 [2024-07-15 12:48:55.106721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.257 [2024-07-15 12:48:55.211108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.257 [2024-07-15 12:48:55.267444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.515  Copying: 512/512 [B] (average 500 kBps) 00:06:39.515 00:06:39.515 12:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jp6zbr5ag0nte0ub07uhgr0p66v50n4lpyecijjc10g3tlmgik5bmqf4e7f66aqr0w00mfeqfv8hdf4me9l0dk9xil8ivpm294w925wdabupqm87zztoz7owustn1pjbkrkare6ig4p0wslbmnem5uyuyj921h77jsplllk3uxfewkm1qxwxqwbjttpsbboxl7ko1dh6hg0d40ygh7qtr7vbrccw08y26wdao6f8hsfhcp4uqsvywpvc3acvuaecbz2wm6au2jprmkgrx8vals5ccn43h64u2w1klcznfmrls4gb2144fagikjqsfhyrto28grz9415wgmcva6js14f1hcp6uv90mxvll9x8nru2a5qoyg0tubf3cki0zhso9n0t3d16ycz7m6nu6h9i91dblptsxz278k8zaqfy8eaptttsjm475wrgo9y9i9e1dmadbi2e5bbfqxg94nz7ju5dlx3ty3ithy5bln92yqxjllyv7vttn124mbwvfyug == 
\j\p\6\z\b\r\5\a\g\0\n\t\e\0\u\b\0\7\u\h\g\r\0\p\6\6\v\5\0\n\4\l\p\y\e\c\i\j\j\c\1\0\g\3\t\l\m\g\i\k\5\b\m\q\f\4\e\7\f\6\6\a\q\r\0\w\0\0\m\f\e\q\f\v\8\h\d\f\4\m\e\9\l\0\d\k\9\x\i\l\8\i\v\p\m\2\9\4\w\9\2\5\w\d\a\b\u\p\q\m\8\7\z\z\t\o\z\7\o\w\u\s\t\n\1\p\j\b\k\r\k\a\r\e\6\i\g\4\p\0\w\s\l\b\m\n\e\m\5\u\y\u\y\j\9\2\1\h\7\7\j\s\p\l\l\l\k\3\u\x\f\e\w\k\m\1\q\x\w\x\q\w\b\j\t\t\p\s\b\b\o\x\l\7\k\o\1\d\h\6\h\g\0\d\4\0\y\g\h\7\q\t\r\7\v\b\r\c\c\w\0\8\y\2\6\w\d\a\o\6\f\8\h\s\f\h\c\p\4\u\q\s\v\y\w\p\v\c\3\a\c\v\u\a\e\c\b\z\2\w\m\6\a\u\2\j\p\r\m\k\g\r\x\8\v\a\l\s\5\c\c\n\4\3\h\6\4\u\2\w\1\k\l\c\z\n\f\m\r\l\s\4\g\b\2\1\4\4\f\a\g\i\k\j\q\s\f\h\y\r\t\o\2\8\g\r\z\9\4\1\5\w\g\m\c\v\a\6\j\s\1\4\f\1\h\c\p\6\u\v\9\0\m\x\v\l\l\9\x\8\n\r\u\2\a\5\q\o\y\g\0\t\u\b\f\3\c\k\i\0\z\h\s\o\9\n\0\t\3\d\1\6\y\c\z\7\m\6\n\u\6\h\9\i\9\1\d\b\l\p\t\s\x\z\2\7\8\k\8\z\a\q\f\y\8\e\a\p\t\t\t\s\j\m\4\7\5\w\r\g\o\9\y\9\i\9\e\1\d\m\a\d\b\i\2\e\5\b\b\f\q\x\g\9\4\n\z\7\j\u\5\d\l\x\3\t\y\3\i\t\h\y\5\b\l\n\9\2\y\q\x\j\l\l\y\v\7\v\t\t\n\1\2\4\m\b\w\v\f\y\u\g ]] 00:06:39.515 12:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.516 12:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:39.774 [2024-07-15 12:48:55.581749] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:39.774 [2024-07-15 12:48:55.581869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63451 ] 00:06:39.774 [2024-07-15 12:48:55.720245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.033 [2024-07-15 12:48:55.835029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.033 [2024-07-15 12:48:55.891723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.292  Copying: 512/512 [B] (average 500 kBps) 00:06:40.292 00:06:40.292 12:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jp6zbr5ag0nte0ub07uhgr0p66v50n4lpyecijjc10g3tlmgik5bmqf4e7f66aqr0w00mfeqfv8hdf4me9l0dk9xil8ivpm294w925wdabupqm87zztoz7owustn1pjbkrkare6ig4p0wslbmnem5uyuyj921h77jsplllk3uxfewkm1qxwxqwbjttpsbboxl7ko1dh6hg0d40ygh7qtr7vbrccw08y26wdao6f8hsfhcp4uqsvywpvc3acvuaecbz2wm6au2jprmkgrx8vals5ccn43h64u2w1klcznfmrls4gb2144fagikjqsfhyrto28grz9415wgmcva6js14f1hcp6uv90mxvll9x8nru2a5qoyg0tubf3cki0zhso9n0t3d16ycz7m6nu6h9i91dblptsxz278k8zaqfy8eaptttsjm475wrgo9y9i9e1dmadbi2e5bbfqxg94nz7ju5dlx3ty3ithy5bln92yqxjllyv7vttn124mbwvfyug == 
\j\p\6\z\b\r\5\a\g\0\n\t\e\0\u\b\0\7\u\h\g\r\0\p\6\6\v\5\0\n\4\l\p\y\e\c\i\j\j\c\1\0\g\3\t\l\m\g\i\k\5\b\m\q\f\4\e\7\f\6\6\a\q\r\0\w\0\0\m\f\e\q\f\v\8\h\d\f\4\m\e\9\l\0\d\k\9\x\i\l\8\i\v\p\m\2\9\4\w\9\2\5\w\d\a\b\u\p\q\m\8\7\z\z\t\o\z\7\o\w\u\s\t\n\1\p\j\b\k\r\k\a\r\e\6\i\g\4\p\0\w\s\l\b\m\n\e\m\5\u\y\u\y\j\9\2\1\h\7\7\j\s\p\l\l\l\k\3\u\x\f\e\w\k\m\1\q\x\w\x\q\w\b\j\t\t\p\s\b\b\o\x\l\7\k\o\1\d\h\6\h\g\0\d\4\0\y\g\h\7\q\t\r\7\v\b\r\c\c\w\0\8\y\2\6\w\d\a\o\6\f\8\h\s\f\h\c\p\4\u\q\s\v\y\w\p\v\c\3\a\c\v\u\a\e\c\b\z\2\w\m\6\a\u\2\j\p\r\m\k\g\r\x\8\v\a\l\s\5\c\c\n\4\3\h\6\4\u\2\w\1\k\l\c\z\n\f\m\r\l\s\4\g\b\2\1\4\4\f\a\g\i\k\j\q\s\f\h\y\r\t\o\2\8\g\r\z\9\4\1\5\w\g\m\c\v\a\6\j\s\1\4\f\1\h\c\p\6\u\v\9\0\m\x\v\l\l\9\x\8\n\r\u\2\a\5\q\o\y\g\0\t\u\b\f\3\c\k\i\0\z\h\s\o\9\n\0\t\3\d\1\6\y\c\z\7\m\6\n\u\6\h\9\i\9\1\d\b\l\p\t\s\x\z\2\7\8\k\8\z\a\q\f\y\8\e\a\p\t\t\t\s\j\m\4\7\5\w\r\g\o\9\y\9\i\9\e\1\d\m\a\d\b\i\2\e\5\b\b\f\q\x\g\9\4\n\z\7\j\u\5\d\l\x\3\t\y\3\i\t\h\y\5\b\l\n\9\2\y\q\x\j\l\l\y\v\7\v\t\t\n\1\2\4\m\b\w\v\f\y\u\g ]] 00:06:40.292 12:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.292 12:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:40.292 [2024-07-15 12:48:56.201787] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:40.292 [2024-07-15 12:48:56.201862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63458 ] 00:06:40.292 [2024-07-15 12:48:56.336489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.551 [2024-07-15 12:48:56.451950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.551 [2024-07-15 12:48:56.514670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.810  Copying: 512/512 [B] (average 55 kBps) 00:06:40.810 00:06:40.810 12:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jp6zbr5ag0nte0ub07uhgr0p66v50n4lpyecijjc10g3tlmgik5bmqf4e7f66aqr0w00mfeqfv8hdf4me9l0dk9xil8ivpm294w925wdabupqm87zztoz7owustn1pjbkrkare6ig4p0wslbmnem5uyuyj921h77jsplllk3uxfewkm1qxwxqwbjttpsbboxl7ko1dh6hg0d40ygh7qtr7vbrccw08y26wdao6f8hsfhcp4uqsvywpvc3acvuaecbz2wm6au2jprmkgrx8vals5ccn43h64u2w1klcznfmrls4gb2144fagikjqsfhyrto28grz9415wgmcva6js14f1hcp6uv90mxvll9x8nru2a5qoyg0tubf3cki0zhso9n0t3d16ycz7m6nu6h9i91dblptsxz278k8zaqfy8eaptttsjm475wrgo9y9i9e1dmadbi2e5bbfqxg94nz7ju5dlx3ty3ithy5bln92yqxjllyv7vttn124mbwvfyug == 
\j\p\6\z\b\r\5\a\g\0\n\t\e\0\u\b\0\7\u\h\g\r\0\p\6\6\v\5\0\n\4\l\p\y\e\c\i\j\j\c\1\0\g\3\t\l\m\g\i\k\5\b\m\q\f\4\e\7\f\6\6\a\q\r\0\w\0\0\m\f\e\q\f\v\8\h\d\f\4\m\e\9\l\0\d\k\9\x\i\l\8\i\v\p\m\2\9\4\w\9\2\5\w\d\a\b\u\p\q\m\8\7\z\z\t\o\z\7\o\w\u\s\t\n\1\p\j\b\k\r\k\a\r\e\6\i\g\4\p\0\w\s\l\b\m\n\e\m\5\u\y\u\y\j\9\2\1\h\7\7\j\s\p\l\l\l\k\3\u\x\f\e\w\k\m\1\q\x\w\x\q\w\b\j\t\t\p\s\b\b\o\x\l\7\k\o\1\d\h\6\h\g\0\d\4\0\y\g\h\7\q\t\r\7\v\b\r\c\c\w\0\8\y\2\6\w\d\a\o\6\f\8\h\s\f\h\c\p\4\u\q\s\v\y\w\p\v\c\3\a\c\v\u\a\e\c\b\z\2\w\m\6\a\u\2\j\p\r\m\k\g\r\x\8\v\a\l\s\5\c\c\n\4\3\h\6\4\u\2\w\1\k\l\c\z\n\f\m\r\l\s\4\g\b\2\1\4\4\f\a\g\i\k\j\q\s\f\h\y\r\t\o\2\8\g\r\z\9\4\1\5\w\g\m\c\v\a\6\j\s\1\4\f\1\h\c\p\6\u\v\9\0\m\x\v\l\l\9\x\8\n\r\u\2\a\5\q\o\y\g\0\t\u\b\f\3\c\k\i\0\z\h\s\o\9\n\0\t\3\d\1\6\y\c\z\7\m\6\n\u\6\h\9\i\9\1\d\b\l\p\t\s\x\z\2\7\8\k\8\z\a\q\f\y\8\e\a\p\t\t\t\s\j\m\4\7\5\w\r\g\o\9\y\9\i\9\e\1\d\m\a\d\b\i\2\e\5\b\b\f\q\x\g\9\4\n\z\7\j\u\5\d\l\x\3\t\y\3\i\t\h\y\5\b\l\n\9\2\y\q\x\j\l\l\y\v\7\v\t\t\n\1\2\4\m\b\w\v\f\y\u\g ]] 00:06:40.810 12:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.810 12:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:41.069 [2024-07-15 12:48:56.880651] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:41.069 [2024-07-15 12:48:56.880777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63471 ] 00:06:41.069 [2024-07-15 12:48:57.019090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.069 [2024-07-15 12:48:57.105916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.328 [2024-07-15 12:48:57.160476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.587  Copying: 512/512 [B] (average 500 kBps) 00:06:41.587 00:06:41.587 12:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jp6zbr5ag0nte0ub07uhgr0p66v50n4lpyecijjc10g3tlmgik5bmqf4e7f66aqr0w00mfeqfv8hdf4me9l0dk9xil8ivpm294w925wdabupqm87zztoz7owustn1pjbkrkare6ig4p0wslbmnem5uyuyj921h77jsplllk3uxfewkm1qxwxqwbjttpsbboxl7ko1dh6hg0d40ygh7qtr7vbrccw08y26wdao6f8hsfhcp4uqsvywpvc3acvuaecbz2wm6au2jprmkgrx8vals5ccn43h64u2w1klcznfmrls4gb2144fagikjqsfhyrto28grz9415wgmcva6js14f1hcp6uv90mxvll9x8nru2a5qoyg0tubf3cki0zhso9n0t3d16ycz7m6nu6h9i91dblptsxz278k8zaqfy8eaptttsjm475wrgo9y9i9e1dmadbi2e5bbfqxg94nz7ju5dlx3ty3ithy5bln92yqxjllyv7vttn124mbwvfyug == 
\j\p\6\z\b\r\5\a\g\0\n\t\e\0\u\b\0\7\u\h\g\r\0\p\6\6\v\5\0\n\4\l\p\y\e\c\i\j\j\c\1\0\g\3\t\l\m\g\i\k\5\b\m\q\f\4\e\7\f\6\6\a\q\r\0\w\0\0\m\f\e\q\f\v\8\h\d\f\4\m\e\9\l\0\d\k\9\x\i\l\8\i\v\p\m\2\9\4\w\9\2\5\w\d\a\b\u\p\q\m\8\7\z\z\t\o\z\7\o\w\u\s\t\n\1\p\j\b\k\r\k\a\r\e\6\i\g\4\p\0\w\s\l\b\m\n\e\m\5\u\y\u\y\j\9\2\1\h\7\7\j\s\p\l\l\l\k\3\u\x\f\e\w\k\m\1\q\x\w\x\q\w\b\j\t\t\p\s\b\b\o\x\l\7\k\o\1\d\h\6\h\g\0\d\4\0\y\g\h\7\q\t\r\7\v\b\r\c\c\w\0\8\y\2\6\w\d\a\o\6\f\8\h\s\f\h\c\p\4\u\q\s\v\y\w\p\v\c\3\a\c\v\u\a\e\c\b\z\2\w\m\6\a\u\2\j\p\r\m\k\g\r\x\8\v\a\l\s\5\c\c\n\4\3\h\6\4\u\2\w\1\k\l\c\z\n\f\m\r\l\s\4\g\b\2\1\4\4\f\a\g\i\k\j\q\s\f\h\y\r\t\o\2\8\g\r\z\9\4\1\5\w\g\m\c\v\a\6\j\s\1\4\f\1\h\c\p\6\u\v\9\0\m\x\v\l\l\9\x\8\n\r\u\2\a\5\q\o\y\g\0\t\u\b\f\3\c\k\i\0\z\h\s\o\9\n\0\t\3\d\1\6\y\c\z\7\m\6\n\u\6\h\9\i\9\1\d\b\l\p\t\s\x\z\2\7\8\k\8\z\a\q\f\y\8\e\a\p\t\t\t\s\j\m\4\7\5\w\r\g\o\9\y\9\i\9\e\1\d\m\a\d\b\i\2\e\5\b\b\f\q\x\g\9\4\n\z\7\j\u\5\d\l\x\3\t\y\3\i\t\h\y\5\b\l\n\9\2\y\q\x\j\l\l\y\v\7\v\t\t\n\1\2\4\m\b\w\v\f\y\u\g ]] 00:06:41.587 12:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:41.587 12:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:41.587 12:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:41.587 12:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:41.587 12:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.587 12:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:41.587 [2024-07-15 12:48:57.483991] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
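Editorial sketch of the loop driving the dd_flags_misc_forced_aio runs above and below: the read side iterates over direct and nonblock, the write side over direct, nonblock, sync and dsync (the flags_ro / flags_rw arrays in the log), and every combination must copy 512 bytes without corrupting the payload. The real test compares the hex-expanded data shown in the log; this illustration reduces the check to cmp and uses hypothetical scratch names.

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=/tmp/dd.scratch0; dst=/tmp/dd.scratch1          # hypothetical scratch names
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)

dd if=/dev/urandom of="$src" bs=512 count=1 status=none
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        "$DD" --aio --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
        cmp -s "$src" "$dst" || { echo "mismatch with $flag_ro/$flag_rw" >&2; exit 1; }
    done
done
echo "all 2x4 flag combinations copied 512 bytes intact"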
00:06:41.587 [2024-07-15 12:48:57.484072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63479 ] 00:06:41.587 [2024-07-15 12:48:57.617191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.846 [2024-07-15 12:48:57.712624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.846 [2024-07-15 12:48:57.768931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.105  Copying: 512/512 [B] (average 500 kBps) 00:06:42.105 00:06:42.106 12:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zontx2in6pqqau2ygq45j9p96yd3zwkthpbrlibd7yku3azbie1qc5kyk5aua9krhzxs0s7y2shbkzzoettzekxfmwybl7uwa0l1g9pyc35f04zxxqv3nc0k30am0m6037zurwgk54xqmcsfwcsawd0s5br8v2fwm1mabbpy73vvrrzf1nb7ie53fovlqlmksnukrhdusqo135qmq9ofl6ax90fa25xjd3utmjh5bskbjvxkebxydgu435qajkum557evmb66aub06cqtbrfctgutiuj5gjttru07ajw9maavt802qc3639hp30ssmmjtrsaw9q4x31dgxs0m97b9s2g4wwkguqt4w0ogpdxh5vzqmu97uhd8aqx9xbcnf8lsrsf46u8fo4rnbkuz9bpg1dhmhxkdbtk8xrrfayhr7smah4lgzt3p8aqdwgrpsbaxikba77llwl28gbmvobxyh0h4fycqquyyvao8ki3eshsnd19cja6xnkim7qxkof9 == \z\o\n\t\x\2\i\n\6\p\q\q\a\u\2\y\g\q\4\5\j\9\p\9\6\y\d\3\z\w\k\t\h\p\b\r\l\i\b\d\7\y\k\u\3\a\z\b\i\e\1\q\c\5\k\y\k\5\a\u\a\9\k\r\h\z\x\s\0\s\7\y\2\s\h\b\k\z\z\o\e\t\t\z\e\k\x\f\m\w\y\b\l\7\u\w\a\0\l\1\g\9\p\y\c\3\5\f\0\4\z\x\x\q\v\3\n\c\0\k\3\0\a\m\0\m\6\0\3\7\z\u\r\w\g\k\5\4\x\q\m\c\s\f\w\c\s\a\w\d\0\s\5\b\r\8\v\2\f\w\m\1\m\a\b\b\p\y\7\3\v\v\r\r\z\f\1\n\b\7\i\e\5\3\f\o\v\l\q\l\m\k\s\n\u\k\r\h\d\u\s\q\o\1\3\5\q\m\q\9\o\f\l\6\a\x\9\0\f\a\2\5\x\j\d\3\u\t\m\j\h\5\b\s\k\b\j\v\x\k\e\b\x\y\d\g\u\4\3\5\q\a\j\k\u\m\5\5\7\e\v\m\b\6\6\a\u\b\0\6\c\q\t\b\r\f\c\t\g\u\t\i\u\j\5\g\j\t\t\r\u\0\7\a\j\w\9\m\a\a\v\t\8\0\2\q\c\3\6\3\9\h\p\3\0\s\s\m\m\j\t\r\s\a\w\9\q\4\x\3\1\d\g\x\s\0\m\9\7\b\9\s\2\g\4\w\w\k\g\u\q\t\4\w\0\o\g\p\d\x\h\5\v\z\q\m\u\9\7\u\h\d\8\a\q\x\9\x\b\c\n\f\8\l\s\r\s\f\4\6\u\8\f\o\4\r\n\b\k\u\z\9\b\p\g\1\d\h\m\h\x\k\d\b\t\k\8\x\r\r\f\a\y\h\r\7\s\m\a\h\4\l\g\z\t\3\p\8\a\q\d\w\g\r\p\s\b\a\x\i\k\b\a\7\7\l\l\w\l\2\8\g\b\m\v\o\b\x\y\h\0\h\4\f\y\c\q\q\u\y\y\v\a\o\8\k\i\3\e\s\h\s\n\d\1\9\c\j\a\6\x\n\k\i\m\7\q\x\k\o\f\9 ]] 00:06:42.106 12:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.106 12:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:42.106 [2024-07-15 12:48:58.122329] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:42.106 [2024-07-15 12:48:58.122485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63486 ] 00:06:42.365 [2024-07-15 12:48:58.264603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.365 [2024-07-15 12:48:58.373788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.623 [2024-07-15 12:48:58.428793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.881  Copying: 512/512 [B] (average 500 kBps) 00:06:42.881 00:06:42.881 12:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zontx2in6pqqau2ygq45j9p96yd3zwkthpbrlibd7yku3azbie1qc5kyk5aua9krhzxs0s7y2shbkzzoettzekxfmwybl7uwa0l1g9pyc35f04zxxqv3nc0k30am0m6037zurwgk54xqmcsfwcsawd0s5br8v2fwm1mabbpy73vvrrzf1nb7ie53fovlqlmksnukrhdusqo135qmq9ofl6ax90fa25xjd3utmjh5bskbjvxkebxydgu435qajkum557evmb66aub06cqtbrfctgutiuj5gjttru07ajw9maavt802qc3639hp30ssmmjtrsaw9q4x31dgxs0m97b9s2g4wwkguqt4w0ogpdxh5vzqmu97uhd8aqx9xbcnf8lsrsf46u8fo4rnbkuz9bpg1dhmhxkdbtk8xrrfayhr7smah4lgzt3p8aqdwgrpsbaxikba77llwl28gbmvobxyh0h4fycqquyyvao8ki3eshsnd19cja6xnkim7qxkof9 == \z\o\n\t\x\2\i\n\6\p\q\q\a\u\2\y\g\q\4\5\j\9\p\9\6\y\d\3\z\w\k\t\h\p\b\r\l\i\b\d\7\y\k\u\3\a\z\b\i\e\1\q\c\5\k\y\k\5\a\u\a\9\k\r\h\z\x\s\0\s\7\y\2\s\h\b\k\z\z\o\e\t\t\z\e\k\x\f\m\w\y\b\l\7\u\w\a\0\l\1\g\9\p\y\c\3\5\f\0\4\z\x\x\q\v\3\n\c\0\k\3\0\a\m\0\m\6\0\3\7\z\u\r\w\g\k\5\4\x\q\m\c\s\f\w\c\s\a\w\d\0\s\5\b\r\8\v\2\f\w\m\1\m\a\b\b\p\y\7\3\v\v\r\r\z\f\1\n\b\7\i\e\5\3\f\o\v\l\q\l\m\k\s\n\u\k\r\h\d\u\s\q\o\1\3\5\q\m\q\9\o\f\l\6\a\x\9\0\f\a\2\5\x\j\d\3\u\t\m\j\h\5\b\s\k\b\j\v\x\k\e\b\x\y\d\g\u\4\3\5\q\a\j\k\u\m\5\5\7\e\v\m\b\6\6\a\u\b\0\6\c\q\t\b\r\f\c\t\g\u\t\i\u\j\5\g\j\t\t\r\u\0\7\a\j\w\9\m\a\a\v\t\8\0\2\q\c\3\6\3\9\h\p\3\0\s\s\m\m\j\t\r\s\a\w\9\q\4\x\3\1\d\g\x\s\0\m\9\7\b\9\s\2\g\4\w\w\k\g\u\q\t\4\w\0\o\g\p\d\x\h\5\v\z\q\m\u\9\7\u\h\d\8\a\q\x\9\x\b\c\n\f\8\l\s\r\s\f\4\6\u\8\f\o\4\r\n\b\k\u\z\9\b\p\g\1\d\h\m\h\x\k\d\b\t\k\8\x\r\r\f\a\y\h\r\7\s\m\a\h\4\l\g\z\t\3\p\8\a\q\d\w\g\r\p\s\b\a\x\i\k\b\a\7\7\l\l\w\l\2\8\g\b\m\v\o\b\x\y\h\0\h\4\f\y\c\q\q\u\y\y\v\a\o\8\k\i\3\e\s\h\s\n\d\1\9\c\j\a\6\x\n\k\i\m\7\q\x\k\o\f\9 ]] 00:06:42.881 12:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.881 12:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:42.881 [2024-07-15 12:48:58.752103] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:42.881 [2024-07-15 12:48:58.752193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63494 ] 00:06:42.881 [2024-07-15 12:48:58.886495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.140 [2024-07-15 12:48:58.984102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.140 [2024-07-15 12:48:59.039878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.399  Copying: 512/512 [B] (average 500 kBps) 00:06:43.399 00:06:43.399 12:48:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zontx2in6pqqau2ygq45j9p96yd3zwkthpbrlibd7yku3azbie1qc5kyk5aua9krhzxs0s7y2shbkzzoettzekxfmwybl7uwa0l1g9pyc35f04zxxqv3nc0k30am0m6037zurwgk54xqmcsfwcsawd0s5br8v2fwm1mabbpy73vvrrzf1nb7ie53fovlqlmksnukrhdusqo135qmq9ofl6ax90fa25xjd3utmjh5bskbjvxkebxydgu435qajkum557evmb66aub06cqtbrfctgutiuj5gjttru07ajw9maavt802qc3639hp30ssmmjtrsaw9q4x31dgxs0m97b9s2g4wwkguqt4w0ogpdxh5vzqmu97uhd8aqx9xbcnf8lsrsf46u8fo4rnbkuz9bpg1dhmhxkdbtk8xrrfayhr7smah4lgzt3p8aqdwgrpsbaxikba77llwl28gbmvobxyh0h4fycqquyyvao8ki3eshsnd19cja6xnkim7qxkof9 == \z\o\n\t\x\2\i\n\6\p\q\q\a\u\2\y\g\q\4\5\j\9\p\9\6\y\d\3\z\w\k\t\h\p\b\r\l\i\b\d\7\y\k\u\3\a\z\b\i\e\1\q\c\5\k\y\k\5\a\u\a\9\k\r\h\z\x\s\0\s\7\y\2\s\h\b\k\z\z\o\e\t\t\z\e\k\x\f\m\w\y\b\l\7\u\w\a\0\l\1\g\9\p\y\c\3\5\f\0\4\z\x\x\q\v\3\n\c\0\k\3\0\a\m\0\m\6\0\3\7\z\u\r\w\g\k\5\4\x\q\m\c\s\f\w\c\s\a\w\d\0\s\5\b\r\8\v\2\f\w\m\1\m\a\b\b\p\y\7\3\v\v\r\r\z\f\1\n\b\7\i\e\5\3\f\o\v\l\q\l\m\k\s\n\u\k\r\h\d\u\s\q\o\1\3\5\q\m\q\9\o\f\l\6\a\x\9\0\f\a\2\5\x\j\d\3\u\t\m\j\h\5\b\s\k\b\j\v\x\k\e\b\x\y\d\g\u\4\3\5\q\a\j\k\u\m\5\5\7\e\v\m\b\6\6\a\u\b\0\6\c\q\t\b\r\f\c\t\g\u\t\i\u\j\5\g\j\t\t\r\u\0\7\a\j\w\9\m\a\a\v\t\8\0\2\q\c\3\6\3\9\h\p\3\0\s\s\m\m\j\t\r\s\a\w\9\q\4\x\3\1\d\g\x\s\0\m\9\7\b\9\s\2\g\4\w\w\k\g\u\q\t\4\w\0\o\g\p\d\x\h\5\v\z\q\m\u\9\7\u\h\d\8\a\q\x\9\x\b\c\n\f\8\l\s\r\s\f\4\6\u\8\f\o\4\r\n\b\k\u\z\9\b\p\g\1\d\h\m\h\x\k\d\b\t\k\8\x\r\r\f\a\y\h\r\7\s\m\a\h\4\l\g\z\t\3\p\8\a\q\d\w\g\r\p\s\b\a\x\i\k\b\a\7\7\l\l\w\l\2\8\g\b\m\v\o\b\x\y\h\0\h\4\f\y\c\q\q\u\y\y\v\a\o\8\k\i\3\e\s\h\s\n\d\1\9\c\j\a\6\x\n\k\i\m\7\q\x\k\o\f\9 ]] 00:06:43.399 12:48:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.399 12:48:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:43.399 [2024-07-15 12:48:59.344200] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:43.399 [2024-07-15 12:48:59.344289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63501 ] 00:06:43.658 [2024-07-15 12:48:59.477046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.658 [2024-07-15 12:48:59.590471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.658 [2024-07-15 12:48:59.644777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.918  Copying: 512/512 [B] (average 500 kBps) 00:06:43.918 00:06:43.918 12:48:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zontx2in6pqqau2ygq45j9p96yd3zwkthpbrlibd7yku3azbie1qc5kyk5aua9krhzxs0s7y2shbkzzoettzekxfmwybl7uwa0l1g9pyc35f04zxxqv3nc0k30am0m6037zurwgk54xqmcsfwcsawd0s5br8v2fwm1mabbpy73vvrrzf1nb7ie53fovlqlmksnukrhdusqo135qmq9ofl6ax90fa25xjd3utmjh5bskbjvxkebxydgu435qajkum557evmb66aub06cqtbrfctgutiuj5gjttru07ajw9maavt802qc3639hp30ssmmjtrsaw9q4x31dgxs0m97b9s2g4wwkguqt4w0ogpdxh5vzqmu97uhd8aqx9xbcnf8lsrsf46u8fo4rnbkuz9bpg1dhmhxkdbtk8xrrfayhr7smah4lgzt3p8aqdwgrpsbaxikba77llwl28gbmvobxyh0h4fycqquyyvao8ki3eshsnd19cja6xnkim7qxkof9 == \z\o\n\t\x\2\i\n\6\p\q\q\a\u\2\y\g\q\4\5\j\9\p\9\6\y\d\3\z\w\k\t\h\p\b\r\l\i\b\d\7\y\k\u\3\a\z\b\i\e\1\q\c\5\k\y\k\5\a\u\a\9\k\r\h\z\x\s\0\s\7\y\2\s\h\b\k\z\z\o\e\t\t\z\e\k\x\f\m\w\y\b\l\7\u\w\a\0\l\1\g\9\p\y\c\3\5\f\0\4\z\x\x\q\v\3\n\c\0\k\3\0\a\m\0\m\6\0\3\7\z\u\r\w\g\k\5\4\x\q\m\c\s\f\w\c\s\a\w\d\0\s\5\b\r\8\v\2\f\w\m\1\m\a\b\b\p\y\7\3\v\v\r\r\z\f\1\n\b\7\i\e\5\3\f\o\v\l\q\l\m\k\s\n\u\k\r\h\d\u\s\q\o\1\3\5\q\m\q\9\o\f\l\6\a\x\9\0\f\a\2\5\x\j\d\3\u\t\m\j\h\5\b\s\k\b\j\v\x\k\e\b\x\y\d\g\u\4\3\5\q\a\j\k\u\m\5\5\7\e\v\m\b\6\6\a\u\b\0\6\c\q\t\b\r\f\c\t\g\u\t\i\u\j\5\g\j\t\t\r\u\0\7\a\j\w\9\m\a\a\v\t\8\0\2\q\c\3\6\3\9\h\p\3\0\s\s\m\m\j\t\r\s\a\w\9\q\4\x\3\1\d\g\x\s\0\m\9\7\b\9\s\2\g\4\w\w\k\g\u\q\t\4\w\0\o\g\p\d\x\h\5\v\z\q\m\u\9\7\u\h\d\8\a\q\x\9\x\b\c\n\f\8\l\s\r\s\f\4\6\u\8\f\o\4\r\n\b\k\u\z\9\b\p\g\1\d\h\m\h\x\k\d\b\t\k\8\x\r\r\f\a\y\h\r\7\s\m\a\h\4\l\g\z\t\3\p\8\a\q\d\w\g\r\p\s\b\a\x\i\k\b\a\7\7\l\l\w\l\2\8\g\b\m\v\o\b\x\y\h\0\h\4\f\y\c\q\q\u\y\y\v\a\o\8\k\i\3\e\s\h\s\n\d\1\9\c\j\a\6\x\n\k\i\m\7\q\x\k\o\f\9 ]] 00:06:43.918 00:06:43.918 real 0m5.000s 00:06:43.918 user 0m2.828s 00:06:43.918 sys 0m1.170s 00:06:43.918 12:48:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.918 12:48:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:43.918 ************************************ 00:06:43.918 END TEST dd_flags_misc_forced_aio 00:06:43.918 ************************************ 00:06:43.918 12:48:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:43.918 12:48:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:43.918 12:48:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:43.918 12:48:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:43.918 00:06:43.918 real 0m22.821s 00:06:43.918 user 0m11.869s 00:06:43.918 sys 0m7.005s 00:06:43.918 12:48:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.918 12:48:59 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:43.918 ************************************ 00:06:43.918 END TEST spdk_dd_posix 00:06:43.918 ************************************ 00:06:44.178 12:48:59 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:44.178 12:48:59 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:44.178 12:48:59 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.178 12:48:59 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.178 12:48:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:44.178 ************************************ 00:06:44.178 START TEST spdk_dd_malloc 00:06:44.178 ************************************ 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:44.178 * Looking for test storage... 00:06:44.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:44.178 ************************************ 00:06:44.178 START TEST dd_malloc_copy 00:06:44.178 ************************************ 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:44.178 12:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.178 [2024-07-15 12:49:00.154588] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:44.178 [2024-07-15 12:49:00.154689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63575 ] 00:06:44.178 { 00:06:44.178 "subsystems": [ 00:06:44.178 { 00:06:44.178 "subsystem": "bdev", 00:06:44.178 "config": [ 00:06:44.178 { 00:06:44.178 "params": { 00:06:44.178 "block_size": 512, 00:06:44.178 "num_blocks": 1048576, 00:06:44.178 "name": "malloc0" 00:06:44.178 }, 00:06:44.178 "method": "bdev_malloc_create" 00:06:44.178 }, 00:06:44.178 { 00:06:44.178 "params": { 00:06:44.178 "block_size": 512, 00:06:44.178 "num_blocks": 1048576, 00:06:44.178 "name": "malloc1" 00:06:44.178 }, 00:06:44.178 "method": "bdev_malloc_create" 00:06:44.178 }, 00:06:44.178 { 00:06:44.178 "method": "bdev_wait_for_examine" 00:06:44.178 } 00:06:44.178 ] 00:06:44.178 } 00:06:44.178 ] 00:06:44.178 } 00:06:44.437 [2024-07-15 12:49:00.290951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.437 [2024-07-15 12:49:00.387324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.437 [2024-07-15 12:49:00.443904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.895  Copying: 204/512 [MB] (204 MBps) Copying: 408/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 203 MBps) 00:06:47.895 00:06:47.895 12:49:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:47.896 12:49:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:47.896 12:49:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:47.896 12:49:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.896 [2024-07-15 12:49:03.945742] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:47.896 [2024-07-15 12:49:03.945859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63623 ] 00:06:47.896 { 00:06:47.896 "subsystems": [ 00:06:47.896 { 00:06:47.896 "subsystem": "bdev", 00:06:47.896 "config": [ 00:06:47.896 { 00:06:47.896 "params": { 00:06:47.896 "block_size": 512, 00:06:47.896 "num_blocks": 1048576, 00:06:47.896 "name": "malloc0" 00:06:47.896 }, 00:06:47.896 "method": "bdev_malloc_create" 00:06:47.896 }, 00:06:47.896 { 00:06:47.896 "params": { 00:06:47.896 "block_size": 512, 00:06:47.896 "num_blocks": 1048576, 00:06:47.896 "name": "malloc1" 00:06:47.896 }, 00:06:47.896 "method": "bdev_malloc_create" 00:06:47.896 }, 00:06:47.896 { 00:06:47.896 "method": "bdev_wait_for_examine" 00:06:47.896 } 00:06:47.896 ] 00:06:47.896 } 00:06:47.896 ] 00:06:47.896 } 00:06:48.217 [2024-07-15 12:49:04.086248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.217 [2024-07-15 12:49:04.186955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.217 [2024-07-15 12:49:04.240892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.032  Copying: 195/512 [MB] (195 MBps) Copying: 396/512 [MB] (200 MBps) Copying: 512/512 [MB] (average 199 MBps) 00:06:52.032 00:06:52.032 00:06:52.032 real 0m7.642s 00:06:52.032 user 0m6.627s 00:06:52.032 sys 0m0.850s 00:06:52.032 12:49:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.032 ************************************ 00:06:52.032 END TEST dd_malloc_copy 00:06:52.032 ************************************ 00:06:52.032 12:49:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.032 12:49:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:06:52.032 00:06:52.032 real 0m7.781s 00:06:52.032 user 0m6.678s 00:06:52.032 sys 0m0.942s 00:06:52.032 12:49:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.032 12:49:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:52.032 ************************************ 00:06:52.032 END TEST spdk_dd_malloc 00:06:52.032 ************************************ 00:06:52.032 12:49:07 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:52.032 12:49:07 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:52.032 12:49:07 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:52.032 12:49:07 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.032 12:49:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:52.032 ************************************ 00:06:52.032 START TEST spdk_dd_bdev_to_bdev 00:06:52.032 ************************************ 00:06:52.032 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:52.032 * Looking for test storage... 
00:06:52.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:52.032 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.032 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.032 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.032 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.032 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.032 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:52.033 
12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.033 ************************************ 00:06:52.033 START TEST dd_inflate_file 00:06:52.033 ************************************ 00:06:52.033 12:49:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:52.033 [2024-07-15 12:49:07.991636] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
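The dd_inflate_file step above uses spdk_dd as a plain file-to-file dd: dd.dump0 starts with the 27-byte magic line ('This Is Our Magic, find it' plus a newline) and --oflag=append then tacks 64 one-MiB blocks of zeros onto it, which is why the later wc -c reports 67108891 bytes (64*1048576 + 27). A short sketch of the same step, reusing the file names from this run; no bdev JSON is needed because only regular files are involved.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

echo 'This Is Our Magic, find it' > "$DUMP0"                                    # 26 chars + newline = 27 bytes
"$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=1048576 --count=64  # append 64 MiB of zeros
wc -c "$DUMP0"                                                                  # expect 67108891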
00:06:52.033 [2024-07-15 12:49:07.991750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63733 ] 00:06:52.291 [2024-07-15 12:49:08.131193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.291 [2024-07-15 12:49:08.236348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.291 [2024-07-15 12:49:08.290018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.550  Copying: 64/64 [MB] (average 1684 MBps) 00:06:52.550 00:06:52.550 00:06:52.550 real 0m0.633s 00:06:52.550 user 0m0.379s 00:06:52.550 sys 0m0.301s 00:06:52.550 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.550 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:52.550 ************************************ 00:06:52.550 END TEST dd_inflate_file 00:06:52.550 ************************************ 00:06:52.808 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:52.808 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:52.808 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:52.808 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:52.808 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:52.808 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.809 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:52.809 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.809 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:52.809 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.809 ************************************ 00:06:52.809 START TEST dd_copy_to_out_bdev 00:06:52.809 ************************************ 00:06:52.809 12:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:52.809 { 00:06:52.809 "subsystems": [ 00:06:52.809 { 00:06:52.809 "subsystem": "bdev", 00:06:52.809 "config": [ 00:06:52.809 { 00:06:52.809 "params": { 00:06:52.809 "trtype": "pcie", 00:06:52.809 "traddr": "0000:00:10.0", 00:06:52.809 "name": "Nvme0" 00:06:52.809 }, 00:06:52.809 "method": "bdev_nvme_attach_controller" 00:06:52.809 }, 00:06:52.809 { 00:06:52.809 "params": { 00:06:52.809 "trtype": "pcie", 00:06:52.809 "traddr": "0000:00:11.0", 00:06:52.809 "name": "Nvme1" 00:06:52.809 }, 00:06:52.809 "method": "bdev_nvme_attach_controller" 00:06:52.809 }, 00:06:52.809 { 00:06:52.809 "method": "bdev_wait_for_examine" 00:06:52.809 } 00:06:52.809 ] 00:06:52.809 } 00:06:52.809 ] 00:06:52.809 } 00:06:52.809 [2024-07-15 12:49:08.675931] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
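The copy-to-bdev pass starting above swaps the malloc config for the two PCIe controllers passed on the command line: 0000:00:10.0 and 0000:00:11.0 are attached as Nvme0 and Nvme1, and the 64 MiB dump file is written into namespace Nvme0n1. A standalone sketch under the same assumptions as the earlier sketches (config written to a placeholder /tmp file instead of /dev/fd/62):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

cat > /tmp/nvme_pair.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --json /tmp/nvme_pair.json   # regular file -> first NVMe namespace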
00:06:52.809 [2024-07-15 12:49:08.676021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63770 ] 00:06:52.809 [2024-07-15 12:49:08.815267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.067 [2024-07-15 12:49:08.921295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.067 [2024-07-15 12:49:08.979041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.710  Copying: 57/64 [MB] (57 MBps) Copying: 64/64 [MB] (average 57 MBps) 00:06:54.710 00:06:54.710 00:06:54.710 real 0m1.907s 00:06:54.710 user 0m1.659s 00:06:54.710 sys 0m1.459s 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:54.710 ************************************ 00:06:54.710 END TEST dd_copy_to_out_bdev 00:06:54.710 ************************************ 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:54.710 ************************************ 00:06:54.710 START TEST dd_offset_magic 00:06:54.710 ************************************ 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:54.710 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:54.711 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:54.711 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:54.711 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:54.711 12:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:54.711 [2024-07-15 12:49:10.626995] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:54.711 [2024-07-15 12:49:10.627095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63817 ] 00:06:54.711 { 00:06:54.711 "subsystems": [ 00:06:54.711 { 00:06:54.711 "subsystem": "bdev", 00:06:54.711 "config": [ 00:06:54.711 { 00:06:54.711 "params": { 00:06:54.711 "trtype": "pcie", 00:06:54.711 "traddr": "0000:00:10.0", 00:06:54.711 "name": "Nvme0" 00:06:54.711 }, 00:06:54.711 "method": "bdev_nvme_attach_controller" 00:06:54.711 }, 00:06:54.711 { 00:06:54.711 "params": { 00:06:54.711 "trtype": "pcie", 00:06:54.711 "traddr": "0000:00:11.0", 00:06:54.711 "name": "Nvme1" 00:06:54.711 }, 00:06:54.711 "method": "bdev_nvme_attach_controller" 00:06:54.711 }, 00:06:54.711 { 00:06:54.711 "method": "bdev_wait_for_examine" 00:06:54.711 } 00:06:54.711 ] 00:06:54.711 } 00:06:54.711 ] 00:06:54.711 } 00:06:54.711 [2024-07-15 12:49:10.761811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.969 [2024-07-15 12:49:10.864518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.969 [2024-07-15 12:49:10.921126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.487  Copying: 65/65 [MB] (average 866 MBps) 00:06:55.487 00:06:55.487 12:49:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:55.487 12:49:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:55.487 12:49:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:55.487 12:49:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:55.487 [2024-07-15 12:49:11.489009] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:55.487 [2024-07-15 12:49:11.489123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63826 ] 00:06:55.487 { 00:06:55.487 "subsystems": [ 00:06:55.487 { 00:06:55.487 "subsystem": "bdev", 00:06:55.487 "config": [ 00:06:55.488 { 00:06:55.488 "params": { 00:06:55.488 "trtype": "pcie", 00:06:55.488 "traddr": "0000:00:10.0", 00:06:55.488 "name": "Nvme0" 00:06:55.488 }, 00:06:55.488 "method": "bdev_nvme_attach_controller" 00:06:55.488 }, 00:06:55.488 { 00:06:55.488 "params": { 00:06:55.488 "trtype": "pcie", 00:06:55.488 "traddr": "0000:00:11.0", 00:06:55.488 "name": "Nvme1" 00:06:55.488 }, 00:06:55.488 "method": "bdev_nvme_attach_controller" 00:06:55.488 }, 00:06:55.488 { 00:06:55.488 "method": "bdev_wait_for_examine" 00:06:55.488 } 00:06:55.488 ] 00:06:55.488 } 00:06:55.488 ] 00:06:55.488 } 00:06:55.745 [2024-07-15 12:49:11.629572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.745 [2024-07-15 12:49:11.751163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.004 [2024-07-15 12:49:11.809245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.262  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:56.262 00:06:56.262 12:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:56.262 12:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:56.262 12:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:56.262 12:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:56.262 12:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:56.262 12:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:56.262 12:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:56.262 [2024-07-15 12:49:12.266600] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:56.262 [2024-07-15 12:49:12.266727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63848 ] 00:06:56.262 { 00:06:56.262 "subsystems": [ 00:06:56.262 { 00:06:56.262 "subsystem": "bdev", 00:06:56.262 "config": [ 00:06:56.262 { 00:06:56.262 "params": { 00:06:56.262 "trtype": "pcie", 00:06:56.262 "traddr": "0000:00:10.0", 00:06:56.262 "name": "Nvme0" 00:06:56.262 }, 00:06:56.262 "method": "bdev_nvme_attach_controller" 00:06:56.262 }, 00:06:56.262 { 00:06:56.262 "params": { 00:06:56.262 "trtype": "pcie", 00:06:56.262 "traddr": "0000:00:11.0", 00:06:56.262 "name": "Nvme1" 00:06:56.262 }, 00:06:56.263 "method": "bdev_nvme_attach_controller" 00:06:56.263 }, 00:06:56.263 { 00:06:56.263 "method": "bdev_wait_for_examine" 00:06:56.263 } 00:06:56.263 ] 00:06:56.263 } 00:06:56.263 ] 00:06:56.263 } 00:06:56.522 [2024-07-15 12:49:12.403514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.522 [2024-07-15 12:49:12.517772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.522 [2024-07-15 12:49:12.576934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.092  Copying: 65/65 [MB] (average 1101 MBps) 00:06:57.092 00:06:57.092 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:57.092 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:57.092 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:57.092 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:57.092 [2024-07-15 12:49:13.134576] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:57.092 [2024-07-15 12:49:13.134686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63868 ] 00:06:57.350 { 00:06:57.350 "subsystems": [ 00:06:57.350 { 00:06:57.350 "subsystem": "bdev", 00:06:57.350 "config": [ 00:06:57.350 { 00:06:57.350 "params": { 00:06:57.350 "trtype": "pcie", 00:06:57.350 "traddr": "0000:00:10.0", 00:06:57.350 "name": "Nvme0" 00:06:57.350 }, 00:06:57.350 "method": "bdev_nvme_attach_controller" 00:06:57.350 }, 00:06:57.350 { 00:06:57.350 "params": { 00:06:57.350 "trtype": "pcie", 00:06:57.350 "traddr": "0000:00:11.0", 00:06:57.350 "name": "Nvme1" 00:06:57.350 }, 00:06:57.350 "method": "bdev_nvme_attach_controller" 00:06:57.350 }, 00:06:57.350 { 00:06:57.350 "method": "bdev_wait_for_examine" 00:06:57.350 } 00:06:57.350 ] 00:06:57.350 } 00:06:57.350 ] 00:06:57.350 } 00:06:57.350 [2024-07-15 12:49:13.273398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.350 [2024-07-15 12:49:13.400079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.610 [2024-07-15 12:49:13.460112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.873  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:57.873 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:57.873 00:06:57.873 real 0m3.284s 00:06:57.873 user 0m2.401s 00:06:57.873 sys 0m0.972s 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:57.873 ************************************ 00:06:57.873 END TEST dd_offset_magic 00:06:57.873 ************************************ 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:57.873 12:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.132 [2024-07-15 12:49:13.947485] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
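The dd_offset_magic loop that finishes above checks --seek/--skip handling: for each offset (16 and 64, counted in 1 MiB blocks) it copies 65 blocks from the start of Nvme0n1, whose first bytes are the magic line written earlier, into Nvme1n1 at that offset, reads a single block back from the same offset into dd.dump1, and verifies that the first 26 bytes still match. A hedged sketch of the same round trip; feeding read from the dump file is an assumption, and /tmp/nvme_pair.json is the placeholder config from the previous sketch.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
MAGIC='This Is Our Magic, find it'

for offset in 16 64; do
    # Land 65 MiB at <offset> MiB inside Nvme1n1, sourced from the start of Nvme0n1.
    "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --bs=1048576 --count=65 --seek="$offset" \
               --json /tmp/nvme_pair.json
    # Pull 1 MiB back out at the same offset and confirm the magic survived.
    "$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --bs=1048576 --count=1 --skip="$offset" \
               --json /tmp/nvme_pair.json
    read -rn26 magic_check < "$DUMP1"
    [[ $magic_check == "$MAGIC" ]] || { echo "magic mismatch at offset $offset" >&2; exit 1; }
done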
00:06:58.132 [2024-07-15 12:49:13.947600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63905 ] 00:06:58.132 { 00:06:58.132 "subsystems": [ 00:06:58.132 { 00:06:58.132 "subsystem": "bdev", 00:06:58.132 "config": [ 00:06:58.132 { 00:06:58.132 "params": { 00:06:58.133 "trtype": "pcie", 00:06:58.133 "traddr": "0000:00:10.0", 00:06:58.133 "name": "Nvme0" 00:06:58.133 }, 00:06:58.133 "method": "bdev_nvme_attach_controller" 00:06:58.133 }, 00:06:58.133 { 00:06:58.133 "params": { 00:06:58.133 "trtype": "pcie", 00:06:58.133 "traddr": "0000:00:11.0", 00:06:58.133 "name": "Nvme1" 00:06:58.133 }, 00:06:58.133 "method": "bdev_nvme_attach_controller" 00:06:58.133 }, 00:06:58.133 { 00:06:58.133 "method": "bdev_wait_for_examine" 00:06:58.133 } 00:06:58.133 ] 00:06:58.133 } 00:06:58.133 ] 00:06:58.133 } 00:06:58.133 [2024-07-15 12:49:14.081746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.133 [2024-07-15 12:49:14.186656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.391 [2024-07-15 12:49:14.242952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.648  Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:58.648 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:58.648 12:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.648 [2024-07-15 12:49:14.695607] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
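The cleanup running here (clear_nvme with a size argument of 4194330, i.e. 4 MiB plus the 26-byte magic) zero-fills the first five 1 MiB blocks of each namespace, five because 4194330 bytes do not fit in four whole blocks, so nothing written by these tests leaks into later suites. An equivalent wipe, under the same placeholder-config assumption as above:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

for bdev in Nvme0n1 Nvme1n1; do
    # ceil(4194330 / 1048576) = 5 blocks of zeros over the start of each namespace
    "$SPDK_DD" --if=/dev/zero --ob="$bdev" --bs=1048576 --count=5 --json /tmp/nvme_pair.json
done
rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1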
00:06:58.648 [2024-07-15 12:49:14.695729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63916 ] 00:06:58.648 { 00:06:58.648 "subsystems": [ 00:06:58.648 { 00:06:58.648 "subsystem": "bdev", 00:06:58.648 "config": [ 00:06:58.648 { 00:06:58.648 "params": { 00:06:58.648 "trtype": "pcie", 00:06:58.648 "traddr": "0000:00:10.0", 00:06:58.648 "name": "Nvme0" 00:06:58.648 }, 00:06:58.648 "method": "bdev_nvme_attach_controller" 00:06:58.648 }, 00:06:58.648 { 00:06:58.648 "params": { 00:06:58.648 "trtype": "pcie", 00:06:58.648 "traddr": "0000:00:11.0", 00:06:58.648 "name": "Nvme1" 00:06:58.648 }, 00:06:58.648 "method": "bdev_nvme_attach_controller" 00:06:58.648 }, 00:06:58.648 { 00:06:58.648 "method": "bdev_wait_for_examine" 00:06:58.648 } 00:06:58.648 ] 00:06:58.648 } 00:06:58.648 ] 00:06:58.648 } 00:06:58.906 [2024-07-15 12:49:14.834666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.906 [2024-07-15 12:49:14.943610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.164 [2024-07-15 12:49:15.001767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.422  Copying: 5120/5120 [kB] (average 714 MBps) 00:06:59.422 00:06:59.422 12:49:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:59.422 00:06:59.422 real 0m7.595s 00:06:59.422 user 0m5.613s 00:06:59.422 sys 0m3.443s 00:06:59.422 12:49:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.422 12:49:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.422 ************************************ 00:06:59.422 END TEST spdk_dd_bdev_to_bdev 00:06:59.422 ************************************ 00:06:59.422 12:49:15 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:59.422 12:49:15 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:59.422 12:49:15 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:59.422 12:49:15 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.422 12:49:15 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.422 12:49:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:59.422 ************************************ 00:06:59.422 START TEST spdk_dd_uring 00:06:59.422 ************************************ 00:06:59.422 12:49:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:59.679 * Looking for test storage... 
00:06:59.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.679 12:49:15 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.679 12:49:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.679 12:49:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.679 12:49:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:59.680 ************************************ 00:06:59.680 START TEST dd_uring_copy 00:06:59.680 ************************************ 00:06:59.680 
12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=i291fb43lsdnnys7smzak02neppr3wem6r9ilm2pi99n4rvjbahv5td9a18iwrfvnxhsh82jin95wklayiuunmcbzo0d9pkjhtfv8kt0eu3xggjklfpharbfal9fp49923iy6orriljhov5gn73wmy6o30ce94axy17h8r2hvb9juwxsv2jzdjj0yscn55gl27y0gciy3xj1slggf4uhe5g1w3jwy6ayzxm2144vj68g1eekzfzz48up00668nbluj3r6kyn0bgp2do8x3d7fs84s8r7y0l6qqzb3re3iscpmpxni9l1sh14iv61wq9nqoa2t0033blb496gcbndjq1rafidr7uhv42m1waucdkr55czyhz191pezyidvhf86ka0zj49ovbhtw5hp1wf1jmirrjqt0j5ag1l4qb6ykgv3qjgawav84a2s95xm5x9a6eu5fhjpq3jrio7u7bfvoggk95gtpkxdk64qct519rjkm3nu2z3br283lb8c77todeviu73ev7ubztqeow3nyi7s23v8n8fbh8y9d9rvo48lt3c1rec51cjw9ilwhptm342zxn6jem2rfuilzxddxsbxf15hv6ubrl4pfyi1lgsy7dzdtwsa3348nn9ewr0xfqtqy3cbfq1ytvs5cdsmahavrhamuomz1nex3luy4v4pwuik3m8lb6hst4em4j6x6ri34s6w5jd76jyyghutriimkfhw4aurkaiv383idid8976g3cncobzpu8a7ukbvpb0s7isuakzd52cxk91dp6bqljbjajpj61huh6z8fxkhkwnwrqhrogwj5zl29ws81908qmpoxgc5xbw12jdwl5vbe4kitgcotamk7w9xoy3za1jc4fe1chh3hvzhkfj7deda8e5i9xov7saachx83mtbadan5h78ytq6xe80lgz3re5nmczlrj03x04nt2itblrz7wlzi1orgcrbo6lbflfc560zmz14zu2a398tsuw8rbglp3qvvtyzzykgadx 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo i291fb43lsdnnys7smzak02neppr3wem6r9ilm2pi99n4rvjbahv5td9a18iwrfvnxhsh82jin95wklayiuunmcbzo0d9pkjhtfv8kt0eu3xggjklfpharbfal9fp49923iy6orriljhov5gn73wmy6o30ce94axy17h8r2hvb9juwxsv2jzdjj0yscn55gl27y0gciy3xj1slggf4uhe5g1w3jwy6ayzxm2144vj68g1eekzfzz48up00668nbluj3r6kyn0bgp2do8x3d7fs84s8r7y0l6qqzb3re3iscpmpxni9l1sh14iv61wq9nqoa2t0033blb496gcbndjq1rafidr7uhv42m1waucdkr55czyhz191pezyidvhf86ka0zj49ovbhtw5hp1wf1jmirrjqt0j5ag1l4qb6ykgv3qjgawav84a2s95xm5x9a6eu5fhjpq3jrio7u7bfvoggk95gtpkxdk64qct519rjkm3nu2z3br283lb8c77todeviu73ev7ubztqeow3nyi7s23v8n8fbh8y9d9rvo48lt3c1rec51cjw9ilwhptm342zxn6jem2rfuilzxddxsbxf15hv6ubrl4pfyi1lgsy7dzdtwsa3348nn9ewr0xfqtqy3cbfq1ytvs5cdsmahavrhamuomz1nex3luy4v4pwuik3m8lb6hst4em4j6x6ri34s6w5jd76jyyghutriimkfhw4aurkaiv383idid8976g3cncobzpu8a7ukbvpb0s7isuakzd52cxk91dp6bqljbjajpj61huh6z8fxkhkwnwrqhrogwj5zl29ws81908qmpoxgc5xbw12jdwl5vbe4kitgcotamk7w9xoy3za1jc4fe1chh3hvzhkfj7deda8e5i9xov7saachx83mtbadan5h78ytq6xe80lgz3re5nmczlrj03x04nt2itblrz7wlzi1orgcrbo6lbflfc560zmz14zu2a398tsuw8rbglp3qvvtyzzykgadx 00:06:59.680 12:49:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:59.680 [2024-07-15 12:49:15.654070] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
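The 536869887-byte append above is sized so that magic.dump0 comes out at exactly 512 MiB, matching the zram disk: the generated magic is 1024 characters, the echo adds a newline, and 1025 + 536869887 = 536870912 = 512 * 1048576. A short sketch of the same construction, where $magic stands for the 1024-byte string produced by gen_bytes:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0

echo "$magic" > "$DUMP0"                                                          # 1024 bytes + newline = 1025
"$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=536869887 --count=1   # pad to exactly 512 MiB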
00:06:59.680 [2024-07-15 12:49:15.654150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:06:59.938 [2024-07-15 12:49:15.792897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.938 [2024-07-15 12:49:15.913795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.938 [2024-07-15 12:49:15.975657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.133  Copying: 511/511 [MB] (average 1309 MBps) 00:07:01.133 00:07:01.133 12:49:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:01.133 12:49:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:01.133 12:49:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.133 12:49:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.133 { 00:07:01.133 "subsystems": [ 00:07:01.133 { 00:07:01.133 "subsystem": "bdev", 00:07:01.133 "config": [ 00:07:01.133 { 00:07:01.133 "params": { 00:07:01.133 "block_size": 512, 00:07:01.133 "num_blocks": 1048576, 00:07:01.133 "name": "malloc0" 00:07:01.133 }, 00:07:01.133 "method": "bdev_malloc_create" 00:07:01.133 }, 00:07:01.133 { 00:07:01.133 "params": { 00:07:01.133 "filename": "/dev/zram1", 00:07:01.133 "name": "uring0" 00:07:01.133 }, 00:07:01.133 "method": "bdev_uring_create" 00:07:01.133 }, 00:07:01.133 { 00:07:01.133 "method": "bdev_wait_for_examine" 00:07:01.133 } 00:07:01.133 ] 00:07:01.133 } 00:07:01.133 ] 00:07:01.133 } 00:07:01.133 [2024-07-15 12:49:17.106996] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:01.133 [2024-07-15 12:49:17.107110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64012 ] 00:07:01.391 [2024-07-15 12:49:17.244976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.391 [2024-07-15 12:49:17.351449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.392 [2024-07-15 12:49:17.411765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.619  Copying: 213/512 [MB] (213 MBps) Copying: 424/512 [MB] (211 MBps) Copying: 512/512 [MB] (average 211 MBps) 00:07:04.619 00:07:04.619 12:49:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:04.619 12:49:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:04.619 12:49:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:04.619 12:49:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:04.619 [2024-07-15 12:49:20.542702] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
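The uring copy pass bounces that 512 MiB file through a zram-backed io_uring bdev: a zram device is hot-added and sized to 512M, /dev/zram1 is exposed as uring0 via bdev_uring_create, and spdk_dd copies magic.dump0 -> uring0 and then uring0 -> magic.dump1. A sketch of the plumbing, assuming root access to the standard zram sysfs nodes as in this run and the same placeholder config-file wrapper as the earlier sketches:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1

# Hot-add a zram device (prints its id; 1 in this run) and give it a 512 MiB disksize.
id=$(cat /sys/class/zram-control/hot_add)
echo 512M > "/sys/block/zram${id}/disksize"

cat > /tmp/uring_copy.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "uring0", "filename": "/dev/zram1" },
          "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
# The config hard-codes /dev/zram1, assuming the hot-add returned id 1 as it did in this run.

"$SPDK_DD" --if="$DUMP0" --ob=uring0   --json /tmp/uring_copy.json   # file -> zram-backed uring bdev
"$SPDK_DD" --ib=uring0   --of="$DUMP1" --json /tmp/uring_copy.json   # and back out to a second dump file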
00:07:04.619 [2024-07-15 12:49:20.542835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64056 ] 00:07:04.619 { 00:07:04.619 "subsystems": [ 00:07:04.619 { 00:07:04.619 "subsystem": "bdev", 00:07:04.619 "config": [ 00:07:04.619 { 00:07:04.619 "params": { 00:07:04.619 "block_size": 512, 00:07:04.619 "num_blocks": 1048576, 00:07:04.619 "name": "malloc0" 00:07:04.619 }, 00:07:04.619 "method": "bdev_malloc_create" 00:07:04.619 }, 00:07:04.619 { 00:07:04.619 "params": { 00:07:04.619 "filename": "/dev/zram1", 00:07:04.619 "name": "uring0" 00:07:04.619 }, 00:07:04.619 "method": "bdev_uring_create" 00:07:04.619 }, 00:07:04.619 { 00:07:04.619 "method": "bdev_wait_for_examine" 00:07:04.619 } 00:07:04.619 ] 00:07:04.619 } 00:07:04.619 ] 00:07:04.619 } 00:07:04.878 [2024-07-15 12:49:20.680700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.878 [2024-07-15 12:49:20.795274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.878 [2024-07-15 12:49:20.857926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.695  Copying: 186/512 [MB] (186 MBps) Copying: 343/512 [MB] (156 MBps) Copying: 501/512 [MB] (158 MBps) Copying: 512/512 [MB] (average 167 MBps) 00:07:08.695 00:07:08.695 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:08.695 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ i291fb43lsdnnys7smzak02neppr3wem6r9ilm2pi99n4rvjbahv5td9a18iwrfvnxhsh82jin95wklayiuunmcbzo0d9pkjhtfv8kt0eu3xggjklfpharbfal9fp49923iy6orriljhov5gn73wmy6o30ce94axy17h8r2hvb9juwxsv2jzdjj0yscn55gl27y0gciy3xj1slggf4uhe5g1w3jwy6ayzxm2144vj68g1eekzfzz48up00668nbluj3r6kyn0bgp2do8x3d7fs84s8r7y0l6qqzb3re3iscpmpxni9l1sh14iv61wq9nqoa2t0033blb496gcbndjq1rafidr7uhv42m1waucdkr55czyhz191pezyidvhf86ka0zj49ovbhtw5hp1wf1jmirrjqt0j5ag1l4qb6ykgv3qjgawav84a2s95xm5x9a6eu5fhjpq3jrio7u7bfvoggk95gtpkxdk64qct519rjkm3nu2z3br283lb8c77todeviu73ev7ubztqeow3nyi7s23v8n8fbh8y9d9rvo48lt3c1rec51cjw9ilwhptm342zxn6jem2rfuilzxddxsbxf15hv6ubrl4pfyi1lgsy7dzdtwsa3348nn9ewr0xfqtqy3cbfq1ytvs5cdsmahavrhamuomz1nex3luy4v4pwuik3m8lb6hst4em4j6x6ri34s6w5jd76jyyghutriimkfhw4aurkaiv383idid8976g3cncobzpu8a7ukbvpb0s7isuakzd52cxk91dp6bqljbjajpj61huh6z8fxkhkwnwrqhrogwj5zl29ws81908qmpoxgc5xbw12jdwl5vbe4kitgcotamk7w9xoy3za1jc4fe1chh3hvzhkfj7deda8e5i9xov7saachx83mtbadan5h78ytq6xe80lgz3re5nmczlrj03x04nt2itblrz7wlzi1orgcrbo6lbflfc560zmz14zu2a398tsuw8rbglp3qvvtyzzykgadx == 
\i\2\9\1\f\b\4\3\l\s\d\n\n\y\s\7\s\m\z\a\k\0\2\n\e\p\p\r\3\w\e\m\6\r\9\i\l\m\2\p\i\9\9\n\4\r\v\j\b\a\h\v\5\t\d\9\a\1\8\i\w\r\f\v\n\x\h\s\h\8\2\j\i\n\9\5\w\k\l\a\y\i\u\u\n\m\c\b\z\o\0\d\9\p\k\j\h\t\f\v\8\k\t\0\e\u\3\x\g\g\j\k\l\f\p\h\a\r\b\f\a\l\9\f\p\4\9\9\2\3\i\y\6\o\r\r\i\l\j\h\o\v\5\g\n\7\3\w\m\y\6\o\3\0\c\e\9\4\a\x\y\1\7\h\8\r\2\h\v\b\9\j\u\w\x\s\v\2\j\z\d\j\j\0\y\s\c\n\5\5\g\l\2\7\y\0\g\c\i\y\3\x\j\1\s\l\g\g\f\4\u\h\e\5\g\1\w\3\j\w\y\6\a\y\z\x\m\2\1\4\4\v\j\6\8\g\1\e\e\k\z\f\z\z\4\8\u\p\0\0\6\6\8\n\b\l\u\j\3\r\6\k\y\n\0\b\g\p\2\d\o\8\x\3\d\7\f\s\8\4\s\8\r\7\y\0\l\6\q\q\z\b\3\r\e\3\i\s\c\p\m\p\x\n\i\9\l\1\s\h\1\4\i\v\6\1\w\q\9\n\q\o\a\2\t\0\0\3\3\b\l\b\4\9\6\g\c\b\n\d\j\q\1\r\a\f\i\d\r\7\u\h\v\4\2\m\1\w\a\u\c\d\k\r\5\5\c\z\y\h\z\1\9\1\p\e\z\y\i\d\v\h\f\8\6\k\a\0\z\j\4\9\o\v\b\h\t\w\5\h\p\1\w\f\1\j\m\i\r\r\j\q\t\0\j\5\a\g\1\l\4\q\b\6\y\k\g\v\3\q\j\g\a\w\a\v\8\4\a\2\s\9\5\x\m\5\x\9\a\6\e\u\5\f\h\j\p\q\3\j\r\i\o\7\u\7\b\f\v\o\g\g\k\9\5\g\t\p\k\x\d\k\6\4\q\c\t\5\1\9\r\j\k\m\3\n\u\2\z\3\b\r\2\8\3\l\b\8\c\7\7\t\o\d\e\v\i\u\7\3\e\v\7\u\b\z\t\q\e\o\w\3\n\y\i\7\s\2\3\v\8\n\8\f\b\h\8\y\9\d\9\r\v\o\4\8\l\t\3\c\1\r\e\c\5\1\c\j\w\9\i\l\w\h\p\t\m\3\4\2\z\x\n\6\j\e\m\2\r\f\u\i\l\z\x\d\d\x\s\b\x\f\1\5\h\v\6\u\b\r\l\4\p\f\y\i\1\l\g\s\y\7\d\z\d\t\w\s\a\3\3\4\8\n\n\9\e\w\r\0\x\f\q\t\q\y\3\c\b\f\q\1\y\t\v\s\5\c\d\s\m\a\h\a\v\r\h\a\m\u\o\m\z\1\n\e\x\3\l\u\y\4\v\4\p\w\u\i\k\3\m\8\l\b\6\h\s\t\4\e\m\4\j\6\x\6\r\i\3\4\s\6\w\5\j\d\7\6\j\y\y\g\h\u\t\r\i\i\m\k\f\h\w\4\a\u\r\k\a\i\v\3\8\3\i\d\i\d\8\9\7\6\g\3\c\n\c\o\b\z\p\u\8\a\7\u\k\b\v\p\b\0\s\7\i\s\u\a\k\z\d\5\2\c\x\k\9\1\d\p\6\b\q\l\j\b\j\a\j\p\j\6\1\h\u\h\6\z\8\f\x\k\h\k\w\n\w\r\q\h\r\o\g\w\j\5\z\l\2\9\w\s\8\1\9\0\8\q\m\p\o\x\g\c\5\x\b\w\1\2\j\d\w\l\5\v\b\e\4\k\i\t\g\c\o\t\a\m\k\7\w\9\x\o\y\3\z\a\1\j\c\4\f\e\1\c\h\h\3\h\v\z\h\k\f\j\7\d\e\d\a\8\e\5\i\9\x\o\v\7\s\a\a\c\h\x\8\3\m\t\b\a\d\a\n\5\h\7\8\y\t\q\6\x\e\8\0\l\g\z\3\r\e\5\n\m\c\z\l\r\j\0\3\x\0\4\n\t\2\i\t\b\l\r\z\7\w\l\z\i\1\o\r\g\c\r\b\o\6\l\b\f\l\f\c\5\6\0\z\m\z\1\4\z\u\2\a\3\9\8\t\s\u\w\8\r\b\g\l\p\3\q\v\v\t\y\z\z\y\k\g\a\d\x ]] 00:07:08.695 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:08.696 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ i291fb43lsdnnys7smzak02neppr3wem6r9ilm2pi99n4rvjbahv5td9a18iwrfvnxhsh82jin95wklayiuunmcbzo0d9pkjhtfv8kt0eu3xggjklfpharbfal9fp49923iy6orriljhov5gn73wmy6o30ce94axy17h8r2hvb9juwxsv2jzdjj0yscn55gl27y0gciy3xj1slggf4uhe5g1w3jwy6ayzxm2144vj68g1eekzfzz48up00668nbluj3r6kyn0bgp2do8x3d7fs84s8r7y0l6qqzb3re3iscpmpxni9l1sh14iv61wq9nqoa2t0033blb496gcbndjq1rafidr7uhv42m1waucdkr55czyhz191pezyidvhf86ka0zj49ovbhtw5hp1wf1jmirrjqt0j5ag1l4qb6ykgv3qjgawav84a2s95xm5x9a6eu5fhjpq3jrio7u7bfvoggk95gtpkxdk64qct519rjkm3nu2z3br283lb8c77todeviu73ev7ubztqeow3nyi7s23v8n8fbh8y9d9rvo48lt3c1rec51cjw9ilwhptm342zxn6jem2rfuilzxddxsbxf15hv6ubrl4pfyi1lgsy7dzdtwsa3348nn9ewr0xfqtqy3cbfq1ytvs5cdsmahavrhamuomz1nex3luy4v4pwuik3m8lb6hst4em4j6x6ri34s6w5jd76jyyghutriimkfhw4aurkaiv383idid8976g3cncobzpu8a7ukbvpb0s7isuakzd52cxk91dp6bqljbjajpj61huh6z8fxkhkwnwrqhrogwj5zl29ws81908qmpoxgc5xbw12jdwl5vbe4kitgcotamk7w9xoy3za1jc4fe1chh3hvzhkfj7deda8e5i9xov7saachx83mtbadan5h78ytq6xe80lgz3re5nmczlrj03x04nt2itblrz7wlzi1orgcrbo6lbflfc560zmz14zu2a398tsuw8rbglp3qvvtyzzykgadx == 
\i\2\9\1\f\b\4\3\l\s\d\n\n\y\s\7\s\m\z\a\k\0\2\n\e\p\p\r\3\w\e\m\6\r\9\i\l\m\2\p\i\9\9\n\4\r\v\j\b\a\h\v\5\t\d\9\a\1\8\i\w\r\f\v\n\x\h\s\h\8\2\j\i\n\9\5\w\k\l\a\y\i\u\u\n\m\c\b\z\o\0\d\9\p\k\j\h\t\f\v\8\k\t\0\e\u\3\x\g\g\j\k\l\f\p\h\a\r\b\f\a\l\9\f\p\4\9\9\2\3\i\y\6\o\r\r\i\l\j\h\o\v\5\g\n\7\3\w\m\y\6\o\3\0\c\e\9\4\a\x\y\1\7\h\8\r\2\h\v\b\9\j\u\w\x\s\v\2\j\z\d\j\j\0\y\s\c\n\5\5\g\l\2\7\y\0\g\c\i\y\3\x\j\1\s\l\g\g\f\4\u\h\e\5\g\1\w\3\j\w\y\6\a\y\z\x\m\2\1\4\4\v\j\6\8\g\1\e\e\k\z\f\z\z\4\8\u\p\0\0\6\6\8\n\b\l\u\j\3\r\6\k\y\n\0\b\g\p\2\d\o\8\x\3\d\7\f\s\8\4\s\8\r\7\y\0\l\6\q\q\z\b\3\r\e\3\i\s\c\p\m\p\x\n\i\9\l\1\s\h\1\4\i\v\6\1\w\q\9\n\q\o\a\2\t\0\0\3\3\b\l\b\4\9\6\g\c\b\n\d\j\q\1\r\a\f\i\d\r\7\u\h\v\4\2\m\1\w\a\u\c\d\k\r\5\5\c\z\y\h\z\1\9\1\p\e\z\y\i\d\v\h\f\8\6\k\a\0\z\j\4\9\o\v\b\h\t\w\5\h\p\1\w\f\1\j\m\i\r\r\j\q\t\0\j\5\a\g\1\l\4\q\b\6\y\k\g\v\3\q\j\g\a\w\a\v\8\4\a\2\s\9\5\x\m\5\x\9\a\6\e\u\5\f\h\j\p\q\3\j\r\i\o\7\u\7\b\f\v\o\g\g\k\9\5\g\t\p\k\x\d\k\6\4\q\c\t\5\1\9\r\j\k\m\3\n\u\2\z\3\b\r\2\8\3\l\b\8\c\7\7\t\o\d\e\v\i\u\7\3\e\v\7\u\b\z\t\q\e\o\w\3\n\y\i\7\s\2\3\v\8\n\8\f\b\h\8\y\9\d\9\r\v\o\4\8\l\t\3\c\1\r\e\c\5\1\c\j\w\9\i\l\w\h\p\t\m\3\4\2\z\x\n\6\j\e\m\2\r\f\u\i\l\z\x\d\d\x\s\b\x\f\1\5\h\v\6\u\b\r\l\4\p\f\y\i\1\l\g\s\y\7\d\z\d\t\w\s\a\3\3\4\8\n\n\9\e\w\r\0\x\f\q\t\q\y\3\c\b\f\q\1\y\t\v\s\5\c\d\s\m\a\h\a\v\r\h\a\m\u\o\m\z\1\n\e\x\3\l\u\y\4\v\4\p\w\u\i\k\3\m\8\l\b\6\h\s\t\4\e\m\4\j\6\x\6\r\i\3\4\s\6\w\5\j\d\7\6\j\y\y\g\h\u\t\r\i\i\m\k\f\h\w\4\a\u\r\k\a\i\v\3\8\3\i\d\i\d\8\9\7\6\g\3\c\n\c\o\b\z\p\u\8\a\7\u\k\b\v\p\b\0\s\7\i\s\u\a\k\z\d\5\2\c\x\k\9\1\d\p\6\b\q\l\j\b\j\a\j\p\j\6\1\h\u\h\6\z\8\f\x\k\h\k\w\n\w\r\q\h\r\o\g\w\j\5\z\l\2\9\w\s\8\1\9\0\8\q\m\p\o\x\g\c\5\x\b\w\1\2\j\d\w\l\5\v\b\e\4\k\i\t\g\c\o\t\a\m\k\7\w\9\x\o\y\3\z\a\1\j\c\4\f\e\1\c\h\h\3\h\v\z\h\k\f\j\7\d\e\d\a\8\e\5\i\9\x\o\v\7\s\a\a\c\h\x\8\3\m\t\b\a\d\a\n\5\h\7\8\y\t\q\6\x\e\8\0\l\g\z\3\r\e\5\n\m\c\z\l\r\j\0\3\x\0\4\n\t\2\i\t\b\l\r\z\7\w\l\z\i\1\o\r\g\c\r\b\o\6\l\b\f\l\f\c\5\6\0\z\m\z\1\4\z\u\2\a\3\9\8\t\s\u\w\8\r\b\g\l\p\3\q\v\v\t\y\z\z\y\k\g\a\d\x ]] 00:07:08.696 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:08.954 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:08.954 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:08.954 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:08.954 12:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.213 [2024-07-15 12:49:25.029406] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
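Verification of the round trip is two-layered: read -rn1024 pulls the first kibibyte of data and bash compares it against the generated magic (the long backslash-escaped pattern in the [[ ... ]] tests above), and diff -q then compares magic.dump0 and magic.dump1 byte for byte before the data is copied back from uring0 into malloc0. A small sketch of equivalent checks; feeding read from the dump files and recovering the magic with head -c are assumptions made only to keep the example standalone.

DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
magic=$(head -c 1024 "$DUMP0")        # the run builds the dumps so their first 1024 bytes are the magic

for f in "$DUMP0" "$DUMP1"; do
    read -rn1024 verify_magic < "$f"
    [[ $verify_magic == "$magic" ]] || { echo "magic prefix mismatch in $f" >&2; exit 1; }
done
diff -q "$DUMP0" "$DUMP1"             # full byte-for-byte comparison of the two dumps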
00:07:09.213 [2024-07-15 12:49:25.029487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64139 ] 00:07:09.213 { 00:07:09.213 "subsystems": [ 00:07:09.213 { 00:07:09.213 "subsystem": "bdev", 00:07:09.213 "config": [ 00:07:09.213 { 00:07:09.213 "params": { 00:07:09.213 "block_size": 512, 00:07:09.213 "num_blocks": 1048576, 00:07:09.213 "name": "malloc0" 00:07:09.213 }, 00:07:09.213 "method": "bdev_malloc_create" 00:07:09.213 }, 00:07:09.213 { 00:07:09.213 "params": { 00:07:09.213 "filename": "/dev/zram1", 00:07:09.213 "name": "uring0" 00:07:09.213 }, 00:07:09.213 "method": "bdev_uring_create" 00:07:09.213 }, 00:07:09.213 { 00:07:09.213 "method": "bdev_wait_for_examine" 00:07:09.213 } 00:07:09.213 ] 00:07:09.213 } 00:07:09.213 ] 00:07:09.213 } 00:07:09.213 [2024-07-15 12:49:25.169978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.471 [2024-07-15 12:49:25.278054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.472 [2024-07-15 12:49:25.338750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.543  Copying: 148/512 [MB] (148 MBps) Copying: 294/512 [MB] (146 MBps) Copying: 441/512 [MB] (146 MBps) Copying: 512/512 [MB] (average 146 MBps) 00:07:13.543 00:07:13.543 12:49:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:13.543 12:49:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:13.543 12:49:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:13.543 12:49:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:13.543 12:49:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:13.543 12:49:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:13.543 12:49:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.543 12:49:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.543 [2024-07-15 12:49:29.513262] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
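The teardown that begins above adds a bdev_uring_delete entry to the generated config, so uring0 is created and immediately removed (that run succeeds but copies 0 bytes), and the NOT-wrapped run that follows below asks for --ib=uring0 with the same create-then-delete config, which has to fail because the bdev is gone before I/O starts; the es= bookkeeping then converts the expected non-zero exit into a pass. A hedged sketch of the same idea; the /tmp path and the /dev/null output target are placeholders.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

cat > /tmp/uring_delete.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "uring0", "filename": "/dev/zram1" },
          "method": "bdev_uring_create" },
        { "params": { "name": "uring0" },
          "method": "bdev_uring_delete" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# Reading from the deleted bdev must fail; invert the exit status the way the NOT helper does.
if "$SPDK_DD" --ib=uring0 --of=/dev/null --json /tmp/uring_delete.json; then
    echo "unexpected success: uring0 should no longer exist" >&2
    exit 1
fi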
00:07:13.543 [2024-07-15 12:49:29.513374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64196 ] 00:07:13.543 { 00:07:13.543 "subsystems": [ 00:07:13.543 { 00:07:13.543 "subsystem": "bdev", 00:07:13.543 "config": [ 00:07:13.543 { 00:07:13.543 "params": { 00:07:13.543 "block_size": 512, 00:07:13.543 "num_blocks": 1048576, 00:07:13.543 "name": "malloc0" 00:07:13.543 }, 00:07:13.543 "method": "bdev_malloc_create" 00:07:13.543 }, 00:07:13.543 { 00:07:13.543 "params": { 00:07:13.543 "filename": "/dev/zram1", 00:07:13.543 "name": "uring0" 00:07:13.543 }, 00:07:13.543 "method": "bdev_uring_create" 00:07:13.543 }, 00:07:13.543 { 00:07:13.543 "params": { 00:07:13.543 "name": "uring0" 00:07:13.543 }, 00:07:13.543 "method": "bdev_uring_delete" 00:07:13.543 }, 00:07:13.543 { 00:07:13.543 "method": "bdev_wait_for_examine" 00:07:13.543 } 00:07:13.543 ] 00:07:13.543 } 00:07:13.543 ] 00:07:13.543 } 00:07:13.802 [2024-07-15 12:49:29.648064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.802 [2024-07-15 12:49:29.756314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.802 [2024-07-15 12:49:29.815047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.628  Copying: 0/0 [B] (average 0 Bps) 00:07:14.628 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.628 12:49:30 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:14.628 { 00:07:14.628 "subsystems": [ 00:07:14.628 { 00:07:14.628 "subsystem": "bdev", 00:07:14.628 "config": [ 00:07:14.628 { 00:07:14.628 "params": { 00:07:14.628 "block_size": 512, 00:07:14.628 "num_blocks": 1048576, 00:07:14.628 "name": "malloc0" 00:07:14.628 }, 00:07:14.628 "method": "bdev_malloc_create" 00:07:14.628 }, 00:07:14.628 { 00:07:14.628 "params": { 00:07:14.628 "filename": "/dev/zram1", 00:07:14.628 "name": "uring0" 00:07:14.628 }, 00:07:14.628 "method": "bdev_uring_create" 00:07:14.628 }, 00:07:14.628 { 00:07:14.628 "params": { 00:07:14.628 "name": "uring0" 00:07:14.628 }, 00:07:14.628 "method": "bdev_uring_delete" 00:07:14.628 }, 00:07:14.628 { 00:07:14.628 "method": "bdev_wait_for_examine" 00:07:14.628 } 00:07:14.628 ] 00:07:14.628 } 00:07:14.628 ] 00:07:14.628 } 00:07:14.628 [2024-07-15 12:49:30.514045] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:14.628 [2024-07-15 12:49:30.514136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64225 ] 00:07:14.628 [2024-07-15 12:49:30.653353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.886 [2024-07-15 12:49:30.757012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.886 [2024-07-15 12:49:30.815775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.144 [2024-07-15 12:49:31.031362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:15.144 [2024-07-15 12:49:31.031456] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:15.144 [2024-07-15 12:49:31.031469] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:15.144 [2024-07-15 12:49:31.031479] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.403 [2024-07-15 12:49:31.354875] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:15.403 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:15.713 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:15.713 00:07:15.713 real 0m16.146s 00:07:15.713 user 0m10.894s 00:07:15.713 sys 0m13.080s 00:07:15.713 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.713 12:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.713 ************************************ 00:07:15.713 END TEST dd_uring_copy 00:07:15.713 ************************************ 00:07:15.996 12:49:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:15.996 00:07:15.996 real 0m16.289s 00:07:15.996 user 0m10.949s 00:07:15.996 sys 0m13.171s 00:07:15.996 12:49:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.996 12:49:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:15.996 ************************************ 00:07:15.996 END TEST spdk_dd_uring 00:07:15.996 ************************************ 00:07:15.996 12:49:31 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:15.996 12:49:31 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:15.996 12:49:31 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.996 12:49:31 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.996 12:49:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:15.996 ************************************ 00:07:15.996 START TEST spdk_dd_sparse 00:07:15.996 ************************************ 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:15.996 * Looking for test storage... 00:07:15.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:15.996 1+0 records in 00:07:15.996 1+0 records out 00:07:15.996 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00665791 s, 630 MB/s 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:15.996 1+0 records in 00:07:15.996 1+0 records out 00:07:15.996 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00658598 s, 637 MB/s 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:15.996 1+0 records in 00:07:15.996 1+0 records out 00:07:15.996 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00548425 s, 765 MB/s 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:15.996 ************************************ 00:07:15.996 START TEST dd_sparse_file_to_file 00:07:15.996 ************************************ 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:15.996 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:15.997 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:15.997 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:15.997 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:15.997 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:15.997 12:49:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 [2024-07-15 12:49:32.001490] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:15.997 [2024-07-15 12:49:32.001588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64316 ] 00:07:15.997 { 00:07:15.997 "subsystems": [ 00:07:15.997 { 00:07:15.997 "subsystem": "bdev", 00:07:15.997 "config": [ 00:07:15.997 { 00:07:15.997 "params": { 00:07:15.997 "block_size": 4096, 00:07:15.997 "filename": "dd_sparse_aio_disk", 00:07:15.997 "name": "dd_aio" 00:07:15.997 }, 00:07:15.997 "method": "bdev_aio_create" 00:07:15.997 }, 00:07:15.997 { 00:07:15.997 "params": { 00:07:15.997 "lvs_name": "dd_lvstore", 00:07:15.997 "bdev_name": "dd_aio" 00:07:15.997 }, 00:07:15.997 "method": "bdev_lvol_create_lvstore" 00:07:15.997 }, 00:07:15.997 { 00:07:15.997 "method": "bdev_wait_for_examine" 00:07:15.997 } 00:07:15.997 ] 00:07:15.997 } 00:07:15.997 ] 00:07:15.997 } 00:07:16.255 [2024-07-15 12:49:32.140818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.255 [2024-07-15 12:49:32.267179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.513 [2024-07-15 12:49:32.325686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.771  Copying: 12/36 [MB] (average 1000 MBps) 00:07:16.771 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:16.771 12:49:32 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:16.771 00:07:16.771 real 0m0.746s 00:07:16.771 user 0m0.491s 00:07:16.771 sys 0m0.355s 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.771 ************************************ 00:07:16.771 END TEST dd_sparse_file_to_file 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:16.771 ************************************ 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:16.771 ************************************ 00:07:16.771 START TEST dd_sparse_file_to_bdev 00:07:16.771 ************************************ 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:16.771 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:16.772 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:16.772 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:16.772 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:16.772 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:16.772 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:16.772 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:16.772 12:49:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:16.772 [2024-07-15 12:49:32.788669] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
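A minimal sketch of the dd_sparse_file_to_bdev flow that starts here, for reference: the sparse file_zero2 is copied into a 36 MiB thin-provisioned logical volume on top of the AIO bdev. Only names visible in the surrounding trace are used (dd_sparse_aio_disk, dd_aio, dd_lvstore, dd_lvol); conf.json is an illustrative stand-in for the config the test actually streams over /dev/fd/62, and dd_lvstore is assumed to already exist on dd_aio from the preceding dd_sparse_file_to_file step.

# illustrative config file; the suite generates equivalent JSON and passes it via --json /dev/fd/62
cat > conf.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_aio_create",
      "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
    { "method": "bdev_lvol_create",
      "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                  "size_in_mib": 36, "thin_provision": true } },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF
./build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json conf.json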
00:07:16.772 [2024-07-15 12:49:32.788749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64359 ] 00:07:16.772 { 00:07:16.772 "subsystems": [ 00:07:16.772 { 00:07:16.772 "subsystem": "bdev", 00:07:16.772 "config": [ 00:07:16.772 { 00:07:16.772 "params": { 00:07:16.772 "block_size": 4096, 00:07:16.772 "filename": "dd_sparse_aio_disk", 00:07:16.772 "name": "dd_aio" 00:07:16.772 }, 00:07:16.772 "method": "bdev_aio_create" 00:07:16.772 }, 00:07:16.772 { 00:07:16.772 "params": { 00:07:16.772 "lvs_name": "dd_lvstore", 00:07:16.772 "lvol_name": "dd_lvol", 00:07:16.772 "size_in_mib": 36, 00:07:16.772 "thin_provision": true 00:07:16.772 }, 00:07:16.772 "method": "bdev_lvol_create" 00:07:16.772 }, 00:07:16.772 { 00:07:16.772 "method": "bdev_wait_for_examine" 00:07:16.772 } 00:07:16.772 ] 00:07:16.772 } 00:07:16.772 ] 00:07:16.772 } 00:07:17.030 [2024-07-15 12:49:32.924011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.030 [2024-07-15 12:49:33.040862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.287 [2024-07-15 12:49:33.100318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.545  Copying: 12/36 [MB] (average 480 MBps) 00:07:17.545 00:07:17.545 00:07:17.545 real 0m0.721s 00:07:17.545 user 0m0.457s 00:07:17.545 sys 0m0.366s 00:07:17.545 ************************************ 00:07:17.545 END TEST dd_sparse_file_to_bdev 00:07:17.545 ************************************ 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:17.545 ************************************ 00:07:17.545 START TEST dd_sparse_bdev_to_file 00:07:17.545 ************************************ 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
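The dd_sparse_bdev_to_file case whose config dump follows runs the same copy in reverse: the logical volume is read back into a regular file with hole skipping enabled, and sparseness is verified by comparing apparent size against allocated blocks. A short sketch using only names from the trace (conf.json again stands in for the config streamed over /dev/fd/62):

./build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json conf.json
# the suite then checks that the copy stayed sparse:
stat --printf=%s file_zero2   # apparent size, 37748736 bytes (36 MiB)
stat --printf=%s file_zero3   # must match the apparent size above
stat --printf=%b file_zero3   # allocated 512-byte blocks, 24576 (the 12 MiB of real data written by prepare)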
00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:17.545 12:49:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:17.545 { 00:07:17.545 "subsystems": [ 00:07:17.545 { 00:07:17.545 "subsystem": "bdev", 00:07:17.545 "config": [ 00:07:17.545 { 00:07:17.545 "params": { 00:07:17.545 "block_size": 4096, 00:07:17.545 "filename": "dd_sparse_aio_disk", 00:07:17.545 "name": "dd_aio" 00:07:17.545 }, 00:07:17.545 "method": "bdev_aio_create" 00:07:17.545 }, 00:07:17.545 { 00:07:17.545 "method": "bdev_wait_for_examine" 00:07:17.545 } 00:07:17.545 ] 00:07:17.545 } 00:07:17.545 ] 00:07:17.545 } 00:07:17.545 [2024-07-15 12:49:33.572528] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:17.545 [2024-07-15 12:49:33.572676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64398 ] 00:07:17.802 [2024-07-15 12:49:33.723214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.802 [2024-07-15 12:49:33.843053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.060 [2024-07-15 12:49:33.900093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.318  Copying: 12/36 [MB] (average 1090 MBps) 00:07:18.318 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:18.318 ************************************ 00:07:18.318 END TEST dd_sparse_bdev_to_file 00:07:18.318 ************************************ 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:18.318 00:07:18.318 real 0m0.776s 00:07:18.318 user 0m0.498s 00:07:18.318 sys 0m0.377s 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:18.318 ************************************ 00:07:18.318 END TEST spdk_dd_sparse 00:07:18.318 ************************************ 00:07:18.318 00:07:18.318 real 0m2.529s 00:07:18.318 user 0m1.541s 00:07:18.318 sys 0m1.287s 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.318 12:49:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:18.576 12:49:34 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:18.576 12:49:34 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:18.576 12:49:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.576 12:49:34 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.576 12:49:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:18.576 ************************************ 00:07:18.576 START TEST spdk_dd_negative 00:07:18.576 ************************************ 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:18.576 * Looking for test storage... 00:07:18.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.576 12:49:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.577 ************************************ 00:07:18.577 START TEST dd_invalid_arguments 00:07:18.577 ************************************ 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.577 12:49:34 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:18.577 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:18.577 00:07:18.577 CPU options: 00:07:18.577 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:18.577 (like [0,1,10]) 00:07:18.577 --lcores lcore to CPU mapping list. The list is in the format: 00:07:18.577 [<,lcores[@CPUs]>...] 00:07:18.577 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:18.577 Within the group, '-' is used for range separator, 00:07:18.577 ',' is used for single number separator. 00:07:18.577 '( )' can be omitted for single element group, 00:07:18.577 '@' can be omitted if cpus and lcores have the same value 00:07:18.577 --disable-cpumask-locks Disable CPU core lock files. 00:07:18.577 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:18.577 pollers in the app support interrupt mode) 00:07:18.577 -p, --main-core main (primary) core for DPDK 00:07:18.577 00:07:18.577 Configuration options: 00:07:18.577 -c, --config, --json JSON config file 00:07:18.577 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:18.577 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:18.577 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:18.577 --rpcs-allowed comma-separated list of permitted RPCS 00:07:18.577 --json-ignore-init-errors don't exit on invalid config entry 00:07:18.577 00:07:18.577 Memory options: 00:07:18.577 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:18.577 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:18.577 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:18.577 -R, --huge-unlink unlink huge files after initialization 00:07:18.577 -n, --mem-channels number of memory channels used for DPDK 00:07:18.577 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:18.577 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:18.577 --no-huge run without using hugepages 00:07:18.577 -i, --shm-id shared memory ID (optional) 00:07:18.577 -g, --single-file-segments force creating just one hugetlbfs file 00:07:18.577 00:07:18.577 PCI options: 00:07:18.577 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:18.577 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:18.577 -u, --no-pci disable PCI access 00:07:18.577 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:18.577 00:07:18.577 Log options: 00:07:18.577 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:18.577 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:18.577 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:18.577 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:18.577 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:18.577 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:18.577 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:18.577 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:18.577 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:18.577 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:18.577 virtio_vfio_user, vmd) 00:07:18.577 --silence-noticelog disable notice level logging to stderr 00:07:18.577 00:07:18.577 Trace options: 00:07:18.577 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:18.577 setting 0 to disable trace (default 32768) 00:07:18.577 Tracepoints vary in size and can use more than one trace entry. 00:07:18.577 -e, --tpoint-group [:] 00:07:18.577 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:18.577 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:18.577 [2024-07-15 12:49:34.559659] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:18.577 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:18.577 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:18.577 a tracepoint group. First tpoint inside a group can be enabled by 00:07:18.577 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:18.577 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:18.577 in /include/spdk_internal/trace_defs.h 00:07:18.577 00:07:18.577 Other options: 00:07:18.577 -h, --help show this usage 00:07:18.577 -v, --version print SPDK version 00:07:18.577 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:18.577 --env-context Opaque context for use of the env implementation 00:07:18.577 00:07:18.577 Application specific: 00:07:18.577 [--------- DD Options ---------] 00:07:18.577 --if Input file. Must specify either --if or --ib. 00:07:18.577 --ib Input bdev. Must specifier either --if or --ib 00:07:18.577 --of Output file. Must specify either --of or --ob. 00:07:18.577 --ob Output bdev. Must specify either --of or --ob. 00:07:18.577 --iflag Input file flags. 00:07:18.577 --oflag Output file flags. 00:07:18.577 --bs I/O unit size (default: 4096) 00:07:18.577 --qd Queue depth (default: 2) 00:07:18.577 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:18.577 --skip Skip this many I/O units at start of input. (default: 0) 00:07:18.577 --seek Skip this many I/O units at start of output. (default: 0) 00:07:18.577 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:18.577 --sparse Enable hole skipping in input target 00:07:18.577 Available iflag and oflag values: 00:07:18.577 append - append mode 00:07:18.577 direct - use direct I/O for data 00:07:18.577 directory - fail unless a directory 00:07:18.577 dsync - use synchronized I/O for data 00:07:18.577 noatime - do not update access time 00:07:18.577 noctty - do not assign controlling terminal from file 00:07:18.577 nofollow - do not follow symlinks 00:07:18.577 nonblock - use non-blocking I/O 00:07:18.577 sync - use synchronized I/O for data and metadata 00:07:18.577 ************************************ 00:07:18.577 END TEST dd_invalid_arguments 00:07:18.577 ************************************ 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.577 00:07:18.577 real 0m0.077s 00:07:18.577 user 0m0.044s 00:07:18.577 sys 0m0.032s 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.577 ************************************ 00:07:18.577 START TEST dd_double_input 00:07:18.577 ************************************ 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.577 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:18.836 [2024-07-15 12:49:34.709319] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
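The dd_double_input case above is a pure argument-validation check: supplying both a file input and a bdev input must make spdk_dd refuse to run, and the NOT wrapper turns that non-zero exit into a test pass. A sketch of the failing invocation, mirroring the trace (the --ib and --ob values are deliberately empty there; the parser rejects the combination before looking at them):

./build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
# stderr: spdk_dd.c: main: *ERROR*: You may specify either --if or --ib, but not both.
# exit status is non-zero, which is exactly what the NOT wrapper asserts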
00:07:18.836 ************************************ 00:07:18.836 END TEST dd_double_input 00:07:18.836 ************************************ 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.836 00:07:18.836 real 0m0.111s 00:07:18.836 user 0m0.072s 00:07:18.836 sys 0m0.036s 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.836 ************************************ 00:07:18.836 START TEST dd_double_output 00:07:18.836 ************************************ 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:18.836 [2024-07-15 12:49:34.848234] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.836 00:07:18.836 real 0m0.077s 00:07:18.836 user 0m0.050s 00:07:18.836 sys 0m0.025s 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.836 ************************************ 00:07:18.836 END TEST dd_double_output 00:07:18.836 ************************************ 00:07:18.836 12:49:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.094 ************************************ 00:07:19.094 START TEST dd_no_input 00:07:19.094 ************************************ 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.094 12:49:34 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:19.094 [2024-07-15 12:49:34.968468] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:19.094 ************************************ 00:07:19.094 END TEST dd_no_input 00:07:19.094 ************************************ 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.094 00:07:19.094 real 0m0.072s 00:07:19.094 user 0m0.044s 00:07:19.094 sys 0m0.027s 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.094 12:49:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.094 ************************************ 00:07:19.094 START TEST dd_no_output 00:07:19.094 ************************************ 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.094 12:49:35 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.094 [2024-07-15 12:49:35.081088] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.094 00:07:19.094 real 0m0.074s 00:07:19.094 user 0m0.043s 00:07:19.094 sys 0m0.029s 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:19.094 ************************************ 00:07:19.094 END TEST dd_no_output 00:07:19.094 ************************************ 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.094 ************************************ 00:07:19.094 START TEST dd_wrong_blocksize 00:07:19.094 ************************************ 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.094 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.095 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.095 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:19.351 [2024-07-15 12:49:35.198266] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.351 00:07:19.351 real 0m0.072s 00:07:19.351 user 0m0.039s 00:07:19.351 sys 0m0.031s 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.351 ************************************ 00:07:19.351 END TEST dd_wrong_blocksize 00:07:19.351 ************************************ 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.351 ************************************ 00:07:19.351 START TEST dd_smaller_blocksize 00:07:19.351 ************************************ 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.351 12:49:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:19.351 [2024-07-15 12:49:35.322850] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:19.351 [2024-07-15 12:49:35.322935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64622 ] 00:07:19.609 [2024-07-15 12:49:35.464144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.609 [2024-07-15 12:49:35.569149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.609 [2024-07-15 12:49:35.627605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.865 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:20.125 [2024-07-15 12:49:35.943452] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:20.125 [2024-07-15 12:49:35.943530] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.125 [2024-07-15 12:49:36.064225] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.125 12:49:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:20.125 12:49:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.125 12:49:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:20.125 12:49:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.125 12:49:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:20.125 12:49:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.125 00:07:20.125 real 0m0.899s 00:07:20.125 user 0m0.409s 00:07:20.125 sys 0m0.382s 00:07:20.125 12:49:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.125 ************************************ 00:07:20.125 END TEST dd_smaller_blocksize 00:07:20.125 ************************************ 00:07:20.125 12:49:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:20.383 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.383 12:49:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.384 ************************************ 00:07:20.384 START TEST dd_invalid_count 00:07:20.384 ************************************ 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:20.384 [2024-07-15 12:49:36.268169] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.384 00:07:20.384 real 0m0.073s 00:07:20.384 user 0m0.049s 00:07:20.384 sys 0m0.023s 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:20.384 ************************************ 00:07:20.384 END TEST dd_invalid_count 
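Every dd_* negative case in this block follows the same pattern the trace keeps repeating: valid_exec_arg resolves the spdk_dd binary, NOT runs it with one deliberately bad argument, and the test passes only if the exit status is non-zero. A condensed standalone sketch of that pattern, covering the cases above as well as the --oflag/--iflag cases traced just below, with the binary and dump-file paths taken from this run (the check_fails helper is our own shorthand, not part of the suite):

# Re-run the negative argument checks from this trace, expecting each one to fail.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

check_fails() {               # pass only when the wrapped command exits non-zero
    if "$@"; then
        echo "UNEXPECTED SUCCESS: $*" >&2
        return 1
    fi
}

check_fails "$DD" --if="$IF" --of="$OF" --bs=0               # "Invalid --bs value"
check_fails "$DD" --if="$IF" --of="$OF" --bs=99999999999999  # "try smaller block size value"
check_fails "$DD" --if="$IF" --of="$OF" --count=-9           # "Invalid --count value"
check_fails "$DD" --ib= --ob= --oflag=0                      # "--oflags may be used only with --of"
check_fails "$DD" --ib= --ob= --iflag=0                      # "--iflags may be used only with --if"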
00:07:20.384 ************************************ 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.384 ************************************ 00:07:20.384 START TEST dd_invalid_oflag 00:07:20.384 ************************************ 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:20.384 [2024-07-15 12:49:36.391323] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:20.384 ************************************ 00:07:20.384 END TEST dd_invalid_oflag 00:07:20.384 ************************************ 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.384 00:07:20.384 real 0m0.081s 00:07:20.384 user 0m0.050s 00:07:20.384 sys 0m0.030s 00:07:20.384 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.384 
12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.642 ************************************ 00:07:20.642 START TEST dd_invalid_iflag 00:07:20.642 ************************************ 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:20.642 [2024-07-15 12:49:36.524657] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.642 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.642 ************************************ 00:07:20.642 END TEST dd_invalid_iflag 00:07:20.642 ************************************ 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.643 00:07:20.643 real 0m0.076s 00:07:20.643 user 0m0.046s 00:07:20.643 sys 0m0.029s 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.643 ************************************ 00:07:20.643 START TEST dd_unknown_flag 00:07:20.643 ************************************ 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.643 12:49:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.643 [2024-07-15 12:49:36.654554] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:20.643 [2024-07-15 12:49:36.654677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64714 ] 00:07:20.900 [2024-07-15 12:49:36.796476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.900 [2024-07-15 12:49:36.922447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.158 [2024-07-15 12:49:36.983084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.158 [2024-07-15 12:49:37.022172] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:21.158 [2024-07-15 12:49:37.022261] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.158 [2024-07-15 12:49:37.022328] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:21.158 [2024-07-15 12:49:37.022345] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.158 [2024-07-15 12:49:37.024673] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:21.158 [2024-07-15 12:49:37.024711] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.158 [2024-07-15 12:49:37.024768] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:21.158 [2024-07-15 12:49:37.024782] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:21.158 [2024-07-15 12:49:37.148230] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.417 00:07:21.417 real 0m0.663s 00:07:21.417 user 0m0.389s 00:07:21.417 sys 0m0.179s 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.417 ************************************ 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:21.417 END TEST dd_unknown_flag 00:07:21.417 ************************************ 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.417 ************************************ 00:07:21.417 START TEST dd_invalid_json 00:07:21.417 ************************************ 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:21.417 12:49:37 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.417 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:21.417 [2024-07-15 12:49:37.371450] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
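The invalid_json case being traced here hands spdk_dd an empty --json document: the bare ':' at negative_dd.sh@95 is the no-op command inside a process substitution, which is why the config argument shows up as /dev/fd/62. A minimal reconstruction of that invocation outside the harness (same binary and dump-file paths assumed; the parser is expected to reject it with "JSON data cannot be empty" and exit non-zero):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# <(:) expands to a /dev/fd path whose contents are empty.
if "$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
         --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
         --json <(:); then
    echo "UNEXPECTED: empty JSON config was accepted" >&2
fi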
00:07:21.417 [2024-07-15 12:49:37.371569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64748 ] 00:07:21.677 [2024-07-15 12:49:37.515135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.677 [2024-07-15 12:49:37.630804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.677 [2024-07-15 12:49:37.630893] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:21.677 [2024-07-15 12:49:37.630915] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.677 [2024-07-15 12:49:37.630925] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.677 [2024-07-15 12:49:37.630972] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.677 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:21.677 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.677 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:21.677 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.677 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:21.677 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.677 00:07:21.677 real 0m0.417s 00:07:21.677 user 0m0.230s 00:07:21.677 sys 0m0.084s 00:07:21.677 ************************************ 00:07:21.677 END TEST dd_invalid_json 00:07:21.677 ************************************ 00:07:21.677 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.677 12:49:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.935 12:49:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:21.935 ************************************ 00:07:21.935 END TEST spdk_dd_negative 00:07:21.935 ************************************ 00:07:21.935 00:07:21.935 real 0m3.375s 00:07:21.935 user 0m1.709s 00:07:21.935 sys 0m1.302s 00:07:21.935 12:49:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.935 12:49:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.935 12:49:37 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:21.935 ************************************ 00:07:21.935 END TEST spdk_dd 00:07:21.936 ************************************ 00:07:21.936 00:07:21.936 real 1m20.638s 00:07:21.936 user 0m52.873s 00:07:21.936 sys 0m34.344s 00:07:21.936 12:49:37 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.936 12:49:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:21.936 12:49:37 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.936 12:49:37 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:21.936 12:49:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:21.936 12:49:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:21.936 12:49:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.936 12:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:21.936 12:49:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:07:21.936 12:49:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:21.936 12:49:37 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:21.936 12:49:37 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:21.936 12:49:37 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:21.936 12:49:37 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:21.936 12:49:37 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.936 12:49:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.936 12:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.936 12:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:21.936 ************************************ 00:07:21.936 START TEST nvmf_tcp 00:07:21.936 ************************************ 00:07:21.936 12:49:37 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.936 * Looking for test storage... 00:07:21.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.936 12:49:37 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.936 12:49:37 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.936 12:49:37 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.936 12:49:37 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.936 12:49:37 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.936 12:49:37 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.936 12:49:37 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:21.936 12:49:37 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.936 12:49:37 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.195 12:49:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:22.195 12:49:37 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:22.195 12:49:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:22.195 12:49:37 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.195 12:49:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 12:49:37 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:22.195 12:49:38 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.195 12:49:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.195 12:49:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.195 12:49:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 ************************************ 00:07:22.195 START TEST nvmf_host_management 00:07:22.195 ************************************ 00:07:22.195 
12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.195 * Looking for test storage... 00:07:22.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:22.195 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:22.196 Cannot find device "nvmf_init_br" 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:22.196 Cannot find device "nvmf_tgt_br" 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:22.196 Cannot find device "nvmf_tgt_br2" 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:22.196 Cannot find device "nvmf_init_br" 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:22.196 Cannot find device "nvmf_tgt_br" 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:22.196 12:49:38 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:22.196 Cannot find device "nvmf_tgt_br2" 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:22.196 Cannot find device "nvmf_br" 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:22.196 Cannot find device "nvmf_init_if" 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:22.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:22.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:22.196 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:22.454 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:22.454 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:22.454 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:22.454 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:22.454 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
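Condensed, the nvmf_veth_init sequence traced above and continuing in the records just below (bridge enslaving, the iptables rules, and the ping checks) builds one bridge joining an initiator-side veth and two target-namespace veths. A standalone restatement of those steps, with the interface names and 10.0.0.x addresses exactly as they appear in this log (run as root; iproute2 and iptables assumed):

# Namespace plus three veth pairs: one initiator-side, two target-side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and tie the host-side ends together with a bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and traffic across the bridge, then sanity-check reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1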
00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:22.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:07:22.455 00:07:22.455 --- 10.0.0.2 ping statistics --- 00:07:22.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.455 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:22.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:22.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:07:22.455 00:07:22.455 --- 10.0.0.3 ping statistics --- 00:07:22.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.455 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:22.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:07:22.455 00:07:22.455 --- 10.0.0.1 ping statistics --- 00:07:22.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.455 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.455 12:49:38 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65007 00:07:22.713 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:22.713 12:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65007 00:07:22.713 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65007 ']' 00:07:22.713 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.713 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.713 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.713 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.713 12:49:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.713 [2024-07-15 12:49:38.574886] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:22.713 [2024-07-15 12:49:38.575003] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.713 [2024-07-15 12:49:38.718844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.971 [2024-07-15 12:49:38.860576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.971 [2024-07-15 12:49:38.860644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.971 [2024-07-15 12:49:38.860659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.971 [2024-07-15 12:49:38.860669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.971 [2024-07-15 12:49:38.860678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
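nvmfappstart, as traced in the records just above, amounts to launching nvmf_tgt inside the target namespace and then blocking until its RPC socket is up. A minimal sketch using the exact command line from this run; the polling loop is our simplification of waitforlisten, not its real implementation:

# Start the target in the namespace created earlier and remember its pid.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Wait for the default RPC socket named in the log (/var/tmp/spdk.sock) to appear.
until [[ -S /var/tmp/spdk.sock ]]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done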
00:07:22.971 [2024-07-15 12:49:38.861167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.971 [2024-07-15 12:49:38.862639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.971 [2024-07-15 12:49:38.862727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.971 [2024-07-15 12:49:38.862867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.971 [2024-07-15 12:49:38.922140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.964 [2024-07-15 12:49:39.648318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.964 Malloc0 00:07:23.964 [2024-07-15 12:49:39.726955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65069 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65069 /var/tmp/bdevperf.sock 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65069 ']' 
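The batched RPCs behind the cat at host_management.sh@23 are not echoed in this trace, but their outcomes are: a Malloc0 bdev (per the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults set earlier in host_management.sh) and a TCP listener on 10.0.0.2:4420. A representative provisioning sequence that would produce the same state via the standard rpc.py client (client path assumed; the exact contents of rpcs.txt may differ):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed location of the SPDK RPC client

"$RPC" nvmf_create_transport -t tcp -o -u 8192          # as traced above
"$RPC" bdev_malloc_create -b Malloc0 64 512             # 64 MB bdev, 512-byte blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420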
00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.964 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:23.965 { 00:07:23.965 "params": { 00:07:23.965 "name": "Nvme$subsystem", 00:07:23.965 "trtype": "$TEST_TRANSPORT", 00:07:23.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.965 "adrfam": "ipv4", 00:07:23.965 "trsvcid": "$NVMF_PORT", 00:07:23.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:23.965 "hdgst": ${hdgst:-false}, 00:07:23.965 "ddgst": ${ddgst:-false} 00:07:23.965 }, 00:07:23.965 "method": "bdev_nvme_attach_controller" 00:07:23.965 } 00:07:23.965 EOF 00:07:23.965 )") 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:23.965 12:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:23.965 "params": { 00:07:23.965 "name": "Nvme0", 00:07:23.965 "trtype": "tcp", 00:07:23.965 "traddr": "10.0.0.2", 00:07:23.965 "adrfam": "ipv4", 00:07:23.965 "trsvcid": "4420", 00:07:23.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:23.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:23.965 "hdgst": false, 00:07:23.965 "ddgst": false 00:07:23.965 }, 00:07:23.965 "method": "bdev_nvme_attach_controller" 00:07:23.965 }' 00:07:23.965 [2024-07-15 12:49:39.850353] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:23.965 [2024-07-15 12:49:39.850462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65069 ] 00:07:23.965 [2024-07-15 12:49:39.993216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.224 [2024-07-15 12:49:40.117764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.224 [2024-07-15 12:49:40.190877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.482 Running I/O for 10 seconds... 00:07:25.062 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.062 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:25.062 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:25.062 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.062 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.062 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.062 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.062 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
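bdevperf's "Running I/O for 10 seconds..." banner above marks the start of the verify workload; while it runs, host_management.sh@54-@58 polls the bdevperf RPC socket until Nvme0n1 has accumulated at least 100 reads (read_io_count=835 on the first and only poll above). A minimal sketch of that polling loop, assuming the same rpc_cmd and jq plumbing that appears in the trace; the retry interval is an assumption:

    # Sketch of the waitforio helper as suggested by the xtrace output above; details assumed.
    waitforio() {
        local sock=$1 bdev=$2 i=10 count
        while (( i != 0 )); do
            count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            [[ $count -ge 100 ]] && return 0   # enough I/O observed, stop polling
            sleep 0.25                         # retry interval assumed, not visible in the log
            (( i-- ))
        done
        return 1                               # no I/O after 10 attempts
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1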
nqn.2016-06.io.spdk:host0 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.063 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.063 [2024-07-15 12:49:40.951433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 
[2024-07-15 12:49:40.951676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 
[2024-07-15 12:49:40.951923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.063 [2024-07-15 12:49:40.951953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.063 [2024-07-15 12:49:40.951964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.951973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.951984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.951994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 
[2024-07-15 12:49:40.952155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 
[2024-07-15 12:49:40.952385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 12:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.064 [2024-07-15 12:49:40.952598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 12:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:25.064 [2024-07-15 12:49:40.952836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.064 [2024-07-15 12:49:40.952925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.064 [2024-07-15 12:49:40.952936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.065 [2024-07-15 12:49:40.952945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.065 [2024-07-15 12:49:40.952955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1088ec0 is same with the state(5) to be set 00:07:25.065 [2024-07-15 12:49:40.953059] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1088ec0 was disconnected and freed. reset controller. 
00:07:25.065 [2024-07-15 12:49:40.953218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.065 [2024-07-15 12:49:40.953235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.065 [2024-07-15 12:49:40.953247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.065 [2024-07-15 12:49:40.953256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.065 [2024-07-15 12:49:40.953266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.065 [2024-07-15 12:49:40.953275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.065 [2024-07-15 12:49:40.953285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.065 [2024-07-15 12:49:40.953294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.065 [2024-07-15 12:49:40.953304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1080d50 is same with the state(5) to be set 00:07:25.065 [2024-07-15 12:49:40.954403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:25.065 task offset: 122880 on job bdev=Nvme0n1 fails 00:07:25.065 00:07:25.065 Latency(us) 00:07:25.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.065 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:25.065 Job: Nvme0n1 ended in about 0.64 seconds with error 00:07:25.065 Verification LBA range: start 0x0 length 0x400 00:07:25.065 Nvme0n1 : 0.64 1494.98 93.44 99.67 0.00 39094.64 2323.55 38130.04 00:07:25.065 =================================================================================================================== 00:07:25.065 Total : 1494.98 93.44 99.67 0.00 39094.64 2323.55 38130.04 00:07:25.065 [2024-07-15 12:49:40.956565] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.065 [2024-07-15 12:49:40.956599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1080d50 (9): Bad file descriptor 00:07:25.065 [2024-07-15 12:49:40.959284] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
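The long run of ABORTED - SQ DELETION completions above is the intended outcome of host_management.sh@84 and @85: removing the host NQN from the subsystem makes the target tear down that host's queue pairs while bdevperf still has writes in flight, and re-adding it lets bdevperf's reset path reconnect, which is what the final "Resetting controller successful" notice reports. Stripped down to the two RPCs involved, taken from the trace and using the same rpc_cmd wrapper:

    # Pull access out from under the initiator mid-I/O, then restore it.
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # bdevperf reports the in-flight I/O as failed (see the Fail/s column above) and schedules a controller reset.
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # With the host re-authorized, the reset reattaches the controller and I/O can resume.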
00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65069 00:07:26.006 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65069) - No such process 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:26.006 { 00:07:26.006 "params": { 00:07:26.006 "name": "Nvme$subsystem", 00:07:26.006 "trtype": "$TEST_TRANSPORT", 00:07:26.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.006 "adrfam": "ipv4", 00:07:26.006 "trsvcid": "$NVMF_PORT", 00:07:26.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.006 "hdgst": ${hdgst:-false}, 00:07:26.006 "ddgst": ${ddgst:-false} 00:07:26.006 }, 00:07:26.006 "method": "bdev_nvme_attach_controller" 00:07:26.006 } 00:07:26.006 EOF 00:07:26.006 )") 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:26.006 12:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:26.006 "params": { 00:07:26.006 "name": "Nvme0", 00:07:26.006 "trtype": "tcp", 00:07:26.006 "traddr": "10.0.0.2", 00:07:26.006 "adrfam": "ipv4", 00:07:26.006 "trsvcid": "4420", 00:07:26.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:26.006 "hdgst": false, 00:07:26.006 "ddgst": false 00:07:26.006 }, 00:07:26.006 "method": "bdev_nvme_attach_controller" 00:07:26.006 }' 00:07:26.006 [2024-07-15 12:49:42.006172] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:26.006 [2024-07-15 12:49:42.006262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65107 ] 00:07:26.264 [2024-07-15 12:49:42.146984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.264 [2024-07-15 12:49:42.277065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.522 [2024-07-15 12:49:42.344583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.522 Running I/O for 1 seconds... 
00:07:27.483 00:07:27.483 Latency(us) 00:07:27.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.483 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:27.483 Verification LBA range: start 0x0 length 0x400 00:07:27.483 Nvme0n1 : 1.04 1535.84 95.99 0.00 0.00 40856.81 4438.57 39559.91 00:07:27.483 =================================================================================================================== 00:07:27.483 Total : 1535.84 95.99 0.00 0.00 40856.81 4438.57 39559.91 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.741 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.741 rmmod nvme_tcp 00:07:27.999 rmmod nvme_fabrics 00:07:27.999 rmmod nvme_keyring 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65007 ']' 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65007 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65007 ']' 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65007 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65007 00:07:27.999 killing process with pid 65007 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65007' 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65007 00:07:27.999 12:49:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65007 00:07:28.259 [2024-07-15 12:49:44.092889] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:28.259 00:07:28.259 real 0m6.149s 00:07:28.259 user 0m23.846s 00:07:28.259 sys 0m1.588s 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.259 ************************************ 00:07:28.259 END TEST nvmf_host_management 00:07:28.259 ************************************ 00:07:28.259 12:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.259 12:49:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:28.259 12:49:44 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:28.259 12:49:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:28.259 12:49:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.259 12:49:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.259 ************************************ 00:07:28.259 START TEST nvmf_lvol 00:07:28.259 ************************************ 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:28.259 * Looking for test storage... 
00:07:28.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.259 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:28.260 12:49:44 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:28.260 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:28.519 Cannot find device "nvmf_tgt_br" 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:28.519 Cannot find device "nvmf_tgt_br2" 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:28.519 Cannot find device "nvmf_tgt_br" 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:28.519 Cannot find device "nvmf_tgt_br2" 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:28.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:28.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:28.519 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:28.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:07:28.778 00:07:28.778 --- 10.0.0.2 ping statistics --- 00:07:28.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.778 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:28.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:28.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:07:28.778 00:07:28.778 --- 10.0.0.3 ping statistics --- 00:07:28.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.778 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:28.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:07:28.778 00:07:28.778 --- 10.0.0.1 ping statistics --- 00:07:28.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.778 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65331 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65331 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65331 ']' 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.778 12:49:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.778 [2024-07-15 12:49:44.698570] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:28.778 [2024-07-15 12:49:44.698670] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.037 [2024-07-15 12:49:44.838769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.037 [2024-07-15 12:49:44.957974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.037 [2024-07-15 12:49:44.958552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
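Before the lvol target comes up, nvmf_veth_init (traced above) builds the virtual network the TCP tests use: the initiator keeps 10.0.0.1 on nvmf_init_if, the target side gets 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, and iptables opens port 4420 toward the initiator interface; the three pings above verify the topology. A trimmed-down sketch of the same setup, with the second target interface (10.0.0.3) omitted for brevity:

    # Minimal reproduction of the topology verified by the pings above; one target interface only.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator-to-namespace reachability check, as in the trace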
00:07:29.037 [2024-07-15 12:49:44.958865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.037 [2024-07-15 12:49:44.959299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.037 [2024-07-15 12:49:44.959558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.037 [2024-07-15 12:49:44.959920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.037 [2024-07-15 12:49:44.960250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.037 [2024-07-15 12:49:44.960285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.037 [2024-07-15 12:49:45.020788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.602 12:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.602 12:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:29.602 12:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.602 12:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.602 12:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.602 12:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.602 12:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.860 [2024-07-15 12:49:45.862674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.860 12:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.119 12:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:30.119 12:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.687 12:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:30.687 12:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:30.687 12:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:30.978 12:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3bc62789-2c2f-483e-b240-a0682215ee4e 00:07:30.978 12:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3bc62789-2c2f-483e-b240-a0682215ee4e lvol 20 00:07:31.237 12:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=47818107-096a-4b2b-847c-27b0cdba322b 00:07:31.237 12:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.522 12:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 47818107-096a-4b2b-847c-27b0cdba322b 00:07:31.780 12:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:32.038 [2024-07-15 12:49:47.994940] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.038 12:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.296 12:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65401 00:07:32.296 12:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:32.296 12:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:33.670 12:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 47818107-096a-4b2b-847c-27b0cdba322b MY_SNAPSHOT 00:07:33.670 12:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f19d8687-d080-40cb-a57f-9852f06b1e3a 00:07:33.670 12:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 47818107-096a-4b2b-847c-27b0cdba322b 30 00:07:33.928 12:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f19d8687-d080-40cb-a57f-9852f06b1e3a MY_CLONE 00:07:34.186 12:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=00d3d0cd-8e34-4ba7-b621-4fc237a277eb 00:07:34.186 12:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 00d3d0cd-8e34-4ba7-b621-4fc237a277eb 00:07:34.814 12:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65401 00:07:42.933 Initializing NVMe Controllers 00:07:42.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.933 Controller IO queue size 128, less than required. 00:07:42.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:42.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:42.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:42.933 Initialization complete. Launching workers. 
00:07:42.933 ======================================================== 00:07:42.933 Latency(us) 00:07:42.933 Device Information : IOPS MiB/s Average min max 00:07:42.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10361.00 40.47 12359.50 2823.19 61162.73 00:07:42.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10690.90 41.76 11979.67 1847.54 100565.34 00:07:42.933 ======================================================== 00:07:42.933 Total : 21051.90 82.23 12166.61 1847.54 100565.34 00:07:42.933 00:07:42.933 12:49:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.933 12:49:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 47818107-096a-4b2b-847c-27b0cdba322b 00:07:43.191 12:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bc62789-2c2f-483e-b240-a0682215ee4e 00:07:43.449 12:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:43.449 12:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:43.449 12:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:43.449 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.449 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.708 rmmod nvme_tcp 00:07:43.708 rmmod nvme_fabrics 00:07:43.708 rmmod nvme_keyring 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65331 ']' 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65331 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65331 ']' 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65331 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65331 00:07:43.708 killing process with pid 65331 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65331' 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65331 00:07:43.708 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65331 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
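
While spdk_nvme_perf (pid 65401) was still writing to the namespace, the test walked the lvol through snapshot, resize, clone and inflate before the latency table above was collected. Pulled out of the interleaved trace, with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and the UUIDs exactly as reported above, the sequence is roughly:

  # lvol operations issued while the 10 s perf run was in flight
  rpc.py bdev_lvol_snapshot 47818107-096a-4b2b-847c-27b0cdba322b MY_SNAPSHOT   # read-only snapshot -> f19d8687-...
  rpc.py bdev_lvol_resize   47818107-096a-4b2b-847c-27b0cdba322b 30            # grow the lvol from 20 to 30
  rpc.py bdev_lvol_clone    f19d8687-d080-40cb-a57f-9852f06b1e3a MY_CLONE      # writable clone of the snapshot -> 00d3d0cd-...
  rpc.py bdev_lvol_inflate  00d3d0cd-8e34-4ba7-b621-4fc237a277eb               # fully allocate the clone, detaching it from its parent
  wait 65401                                                                   # then let spdk_nvme_perf finish
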
00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:43.966 ************************************ 00:07:43.966 END TEST nvmf_lvol 00:07:43.966 ************************************ 00:07:43.966 00:07:43.966 real 0m15.707s 00:07:43.966 user 1m5.329s 00:07:43.966 sys 0m4.252s 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:43.966 12:49:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:43.966 12:49:59 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:43.966 12:49:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.966 12:49:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.966 12:49:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.966 ************************************ 00:07:43.966 START TEST nvmf_lvs_grow 00:07:43.966 ************************************ 00:07:43.966 12:49:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:44.225 * Looking for test storage... 
00:07:44.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:44.225 Cannot find device "nvmf_tgt_br" 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:44.225 Cannot find device "nvmf_tgt_br2" 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:44.225 Cannot find device "nvmf_tgt_br" 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:44.225 Cannot find device "nvmf_tgt_br2" 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:44.225 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:44.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:44.225 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:44.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:44.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:07:44.484 00:07:44.484 --- 10.0.0.2 ping statistics --- 00:07:44.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.484 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:44.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:44.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:07:44.484 00:07:44.484 --- 10.0.0.3 ping statistics --- 00:07:44.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.484 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:44.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:44.484 00:07:44.484 --- 10.0.0.1 ping statistics --- 00:07:44.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.484 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.484 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65722 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65722 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65722 ']' 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
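
The nvmf_veth_init block traced above (nvmf/common.sh@166 through @207, rebuilt here for nvmf_lvs_grow after the nvmf_lvol run tore its copy down) is easier to read stripped of the trace prefixes. The following is only a condensed restatement of the commands already shown, with the same interface names and addresses: host-side nvmf_init_if (10.0.0.1) is bridged to nvmf_tgt_if / nvmf_tgt_if2 (10.0.0.2 / 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace where nvmf_tgt runs. It needs root and assumes none of the links exist yet:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host side -> target namespace, as verified in the trace
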
00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.485 12:50:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.485 [2024-07-15 12:50:00.525980] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:44.485 [2024-07-15 12:50:00.526104] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.746 [2024-07-15 12:50:00.670611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.746 [2024-07-15 12:50:00.778872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.746 [2024-07-15 12:50:00.778926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.746 [2024-07-15 12:50:00.778955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.746 [2024-07-15 12:50:00.778963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.746 [2024-07-15 12:50:00.778970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.746 [2024-07-15 12:50:00.778999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.005 [2024-07-15 12:50:00.831786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.572 12:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.572 12:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:45.572 12:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.572 12:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.572 12:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.572 12:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.572 12:50:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:45.831 [2024-07-15 12:50:01.805215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.831 ************************************ 00:07:45.831 START TEST lvs_grow_clean 00:07:45.831 ************************************ 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:45.831 12:50:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.831 12:50:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.396 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:46.396 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:46.396 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=05a033d1-3777-4f6b-839c-86f108333fce 00:07:46.396 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:46.396 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:07:46.964 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:46.964 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:46.964 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 05a033d1-3777-4f6b-839c-86f108333fce lvol 150 00:07:46.964 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d2dd76d2-7190-4e1f-8bfc-43c595476c10 00:07:46.964 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:46.964 12:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:47.224 [2024-07-15 12:50:03.164342] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:47.224 [2024-07-15 12:50:03.164490] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:47.224 true 00:07:47.224 12:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:07:47.224 12:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:47.485 12:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:47.485 12:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:47.743 12:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2dd76d2-7190-4e1f-8bfc-43c595476c10 00:07:48.001 12:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.260 [2024-07-15 12:50:04.170110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.260 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65810 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65810 /var/tmp/bdevperf.sock 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65810 ']' 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.518 12:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:48.518 [2024-07-15 12:50:04.508778] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:48.518 [2024-07-15 12:50:04.509285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65810 ] 00:07:48.776 [2024-07-15 12:50:04.649926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.776 [2024-07-15 12:50:04.759917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.776 [2024-07-15 12:50:04.813802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.715 12:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.715 12:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:49.715 12:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.715 Nvme0n1 00:07:49.715 12:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:49.972 [ 00:07:49.972 { 00:07:49.972 "name": "Nvme0n1", 00:07:49.972 "aliases": [ 00:07:49.972 "d2dd76d2-7190-4e1f-8bfc-43c595476c10" 00:07:49.972 ], 00:07:49.972 "product_name": "NVMe disk", 00:07:49.972 "block_size": 4096, 00:07:49.972 "num_blocks": 38912, 00:07:49.972 "uuid": "d2dd76d2-7190-4e1f-8bfc-43c595476c10", 00:07:49.972 "assigned_rate_limits": { 00:07:49.972 "rw_ios_per_sec": 0, 00:07:49.972 "rw_mbytes_per_sec": 0, 00:07:49.972 "r_mbytes_per_sec": 0, 00:07:49.972 "w_mbytes_per_sec": 0 00:07:49.972 }, 00:07:49.972 "claimed": false, 00:07:49.972 "zoned": false, 00:07:49.972 "supported_io_types": { 00:07:49.972 "read": true, 00:07:49.972 "write": true, 00:07:49.972 "unmap": true, 00:07:49.972 "flush": true, 00:07:49.972 "reset": true, 00:07:49.972 "nvme_admin": true, 00:07:49.972 "nvme_io": true, 00:07:49.972 "nvme_io_md": false, 00:07:49.972 "write_zeroes": true, 00:07:49.972 "zcopy": false, 00:07:49.972 "get_zone_info": false, 00:07:49.972 "zone_management": false, 00:07:49.972 "zone_append": false, 00:07:49.972 "compare": true, 00:07:49.972 "compare_and_write": true, 00:07:49.972 "abort": true, 00:07:49.972 "seek_hole": false, 00:07:49.972 "seek_data": false, 00:07:49.972 "copy": true, 00:07:49.972 "nvme_iov_md": false 00:07:49.972 }, 00:07:49.972 "memory_domains": [ 00:07:49.972 { 00:07:49.972 "dma_device_id": "system", 00:07:49.972 "dma_device_type": 1 00:07:49.972 } 00:07:49.972 ], 00:07:49.972 "driver_specific": { 00:07:49.972 "nvme": [ 00:07:49.972 { 00:07:49.972 "trid": { 00:07:49.972 "trtype": "TCP", 00:07:49.972 "adrfam": "IPv4", 00:07:49.972 "traddr": "10.0.0.2", 00:07:49.972 "trsvcid": "4420", 00:07:49.972 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:49.972 }, 00:07:49.972 "ctrlr_data": { 00:07:49.972 "cntlid": 1, 00:07:49.972 "vendor_id": "0x8086", 00:07:49.972 "model_number": "SPDK bdev Controller", 00:07:49.972 "serial_number": "SPDK0", 00:07:49.972 "firmware_revision": "24.09", 00:07:49.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.972 "oacs": { 00:07:49.972 "security": 0, 00:07:49.972 "format": 0, 00:07:49.972 "firmware": 0, 00:07:49.972 "ns_manage": 0 00:07:49.972 }, 00:07:49.972 "multi_ctrlr": true, 00:07:49.972 
"ana_reporting": false 00:07:49.972 }, 00:07:49.972 "vs": { 00:07:49.972 "nvme_version": "1.3" 00:07:49.972 }, 00:07:49.972 "ns_data": { 00:07:49.972 "id": 1, 00:07:49.972 "can_share": true 00:07:49.972 } 00:07:49.972 } 00:07:49.972 ], 00:07:49.972 "mp_policy": "active_passive" 00:07:49.972 } 00:07:49.972 } 00:07:49.972 ] 00:07:49.972 12:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65839 00:07:49.972 12:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:49.972 12:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:50.229 Running I/O for 10 seconds... 00:07:51.161 Latency(us) 00:07:51.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.161 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:51.161 =================================================================================================================== 00:07:51.161 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:51.161 00:07:52.094 12:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 05a033d1-3777-4f6b-839c-86f108333fce 00:07:52.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.094 Nvme0n1 : 2.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:07:52.094 =================================================================================================================== 00:07:52.094 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:07:52.094 00:07:52.353 true 00:07:52.353 12:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:07:52.353 12:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:52.610 12:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.610 12:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.610 12:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65839 00:07:53.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.178 Nvme0n1 : 3.00 7662.33 29.93 0.00 0.00 0.00 0.00 0.00 00:07:53.178 =================================================================================================================== 00:07:53.178 Total : 7662.33 29.93 0.00 0.00 0.00 0.00 0.00 00:07:53.178 00:07:54.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.112 Nvme0n1 : 4.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:07:54.112 =================================================================================================================== 00:07:54.112 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:07:54.112 00:07:55.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.487 Nvme0n1 : 5.00 7518.40 29.37 0.00 0.00 0.00 0.00 0.00 00:07:55.487 =================================================================================================================== 00:07:55.487 Total : 7518.40 29.37 0.00 0.00 0.00 
0.00 0.00 00:07:55.487 00:07:56.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.422 Nvme0n1 : 6.00 7471.83 29.19 0.00 0.00 0.00 0.00 0.00 00:07:56.422 =================================================================================================================== 00:07:56.422 Total : 7471.83 29.19 0.00 0.00 0.00 0.00 0.00 00:07:56.422 00:07:57.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.367 Nvme0n1 : 7.00 7420.43 28.99 0.00 0.00 0.00 0.00 0.00 00:07:57.367 =================================================================================================================== 00:07:57.367 Total : 7420.43 28.99 0.00 0.00 0.00 0.00 0.00 00:07:57.367 00:07:58.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.302 Nvme0n1 : 8.00 7413.62 28.96 0.00 0.00 0.00 0.00 0.00 00:07:58.302 =================================================================================================================== 00:07:58.302 Total : 7413.62 28.96 0.00 0.00 0.00 0.00 0.00 00:07:58.302 00:07:59.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.232 Nvme0n1 : 9.00 7394.22 28.88 0.00 0.00 0.00 0.00 0.00 00:07:59.232 =================================================================================================================== 00:07:59.232 Total : 7394.22 28.88 0.00 0.00 0.00 0.00 0.00 00:07:59.232 00:08:00.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.163 Nvme0n1 : 10.00 7404.10 28.92 0.00 0.00 0.00 0.00 0.00 00:08:00.163 =================================================================================================================== 00:08:00.163 Total : 7404.10 28.92 0.00 0.00 0.00 0.00 0.00 00:08:00.163 00:08:00.163 00:08:00.163 Latency(us) 00:08:00.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.163 Nvme0n1 : 10.02 7403.73 28.92 0.00 0.00 17283.02 14239.19 38844.97 00:08:00.163 =================================================================================================================== 00:08:00.163 Total : 7403.73 28.92 0.00 0.00 17283.02 14239.19 38844.97 00:08:00.163 0 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65810 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65810 ']' 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65810 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65810 00:08:00.163 killing process with pid 65810 00:08:00.163 Received shutdown signal, test time was about 10.000000 seconds 00:08:00.163 00:08:00.163 Latency(us) 00:08:00.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.163 =================================================================================================================== 00:08:00.163 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65810' 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65810 00:08:00.163 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65810 00:08:00.419 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.677 12:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.241 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:08:01.241 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:01.241 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:01.241 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:01.241 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.499 [2024-07-15 12:50:17.518112] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:01.499 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:08:01.499 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:01.499 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:08:01.499 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.499 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.499 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.499 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.499 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.757 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.757 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.757 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:01.757 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:08:01.757 request: 00:08:01.757 { 00:08:01.757 "uuid": "05a033d1-3777-4f6b-839c-86f108333fce", 00:08:01.757 "method": "bdev_lvol_get_lvstores", 00:08:01.757 "req_id": 1 00:08:01.757 } 00:08:01.757 Got JSON-RPC error response 00:08:01.757 response: 00:08:01.757 { 00:08:01.757 "code": -19, 00:08:01.757 "message": "No such device" 00:08:01.757 } 00:08:01.757 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:01.757 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:01.758 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:01.758 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:01.758 12:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.015 aio_bdev 00:08:02.015 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d2dd76d2-7190-4e1f-8bfc-43c595476c10 00:08:02.015 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d2dd76d2-7190-4e1f-8bfc-43c595476c10 00:08:02.015 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:02.015 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:02.015 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:02.015 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:02.015 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:02.273 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2dd76d2-7190-4e1f-8bfc-43c595476c10 -t 2000 00:08:02.538 [ 00:08:02.538 { 00:08:02.538 "name": "d2dd76d2-7190-4e1f-8bfc-43c595476c10", 00:08:02.538 "aliases": [ 00:08:02.538 "lvs/lvol" 00:08:02.538 ], 00:08:02.538 "product_name": "Logical Volume", 00:08:02.538 "block_size": 4096, 00:08:02.538 "num_blocks": 38912, 00:08:02.538 "uuid": "d2dd76d2-7190-4e1f-8bfc-43c595476c10", 00:08:02.538 "assigned_rate_limits": { 00:08:02.538 "rw_ios_per_sec": 0, 00:08:02.538 "rw_mbytes_per_sec": 0, 00:08:02.538 "r_mbytes_per_sec": 0, 00:08:02.538 "w_mbytes_per_sec": 0 00:08:02.538 }, 00:08:02.538 "claimed": false, 00:08:02.538 "zoned": false, 00:08:02.538 "supported_io_types": { 00:08:02.538 "read": true, 00:08:02.538 "write": true, 00:08:02.538 "unmap": true, 00:08:02.538 "flush": false, 00:08:02.538 "reset": true, 00:08:02.538 "nvme_admin": false, 00:08:02.538 "nvme_io": false, 00:08:02.538 "nvme_io_md": false, 00:08:02.538 "write_zeroes": true, 00:08:02.538 "zcopy": false, 00:08:02.538 "get_zone_info": false, 00:08:02.538 "zone_management": false, 00:08:02.538 "zone_append": false, 00:08:02.538 "compare": false, 00:08:02.538 "compare_and_write": false, 00:08:02.538 "abort": false, 00:08:02.538 "seek_hole": true, 00:08:02.538 "seek_data": true, 00:08:02.538 "copy": false, 00:08:02.538 "nvme_iov_md": false 00:08:02.538 }, 00:08:02.538 
"driver_specific": { 00:08:02.538 "lvol": { 00:08:02.538 "lvol_store_uuid": "05a033d1-3777-4f6b-839c-86f108333fce", 00:08:02.538 "base_bdev": "aio_bdev", 00:08:02.538 "thin_provision": false, 00:08:02.538 "num_allocated_clusters": 38, 00:08:02.538 "snapshot": false, 00:08:02.538 "clone": false, 00:08:02.538 "esnap_clone": false 00:08:02.538 } 00:08:02.538 } 00:08:02.538 } 00:08:02.538 ] 00:08:02.538 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:02.538 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:08:02.538 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:02.819 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:02.819 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05a033d1-3777-4f6b-839c-86f108333fce 00:08:02.819 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:03.077 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:03.077 12:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d2dd76d2-7190-4e1f-8bfc-43c595476c10 00:08:03.335 12:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05a033d1-3777-4f6b-839c-86f108333fce 00:08:03.593 12:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.873 12:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.130 ************************************ 00:08:04.130 END TEST lvs_grow_clean 00:08:04.130 ************************************ 00:08:04.130 00:08:04.130 real 0m18.247s 00:08:04.130 user 0m17.119s 00:08:04.130 sys 0m2.540s 00:08:04.130 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.130 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:04.130 12:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:04.130 12:50:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:04.130 12:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.131 ************************************ 00:08:04.131 START TEST lvs_grow_dirty 00:08:04.131 ************************************ 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.131 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.387 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:04.387 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:04.949 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:04.949 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:04.949 12:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:05.206 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:05.206 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:05.206 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 32071464-e1aa-4935-94ac-8255cc4d20f0 lvol 150 00:08:05.463 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8350e380-589f-4172-886f-9017c8744c58 00:08:05.463 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.463 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:05.730 [2024-07-15 12:50:21.544148] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:05.730 [2024-07-15 12:50:21.544242] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:05.730 true 00:08:05.730 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:05.730 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:06.006 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:06.006 12:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.006 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8350e380-589f-4172-886f-9017c8744c58 00:08:06.263 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:06.520 [2024-07-15 12:50:22.504696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.520 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66081 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66081 /var/tmp/bdevperf.sock 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66081 ']' 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.777 12:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.777 [2024-07-15 12:50:22.813508] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:06.777 [2024-07-15 12:50:22.813599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66081 ] 00:08:07.035 [2024-07-15 12:50:22.952616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.035 [2024-07-15 12:50:23.093644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.313 [2024-07-15 12:50:23.149349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.885 12:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.885 12:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:07.885 12:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:08.143 Nvme0n1 00:08:08.143 12:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:08.401 [ 00:08:08.401 { 00:08:08.401 "name": "Nvme0n1", 00:08:08.401 "aliases": [ 00:08:08.401 "8350e380-589f-4172-886f-9017c8744c58" 00:08:08.401 ], 00:08:08.401 "product_name": "NVMe disk", 00:08:08.401 "block_size": 4096, 00:08:08.401 "num_blocks": 38912, 00:08:08.401 "uuid": "8350e380-589f-4172-886f-9017c8744c58", 00:08:08.401 "assigned_rate_limits": { 00:08:08.401 "rw_ios_per_sec": 0, 00:08:08.401 "rw_mbytes_per_sec": 0, 00:08:08.401 "r_mbytes_per_sec": 0, 00:08:08.401 "w_mbytes_per_sec": 0 00:08:08.401 }, 00:08:08.401 "claimed": false, 00:08:08.401 "zoned": false, 00:08:08.401 "supported_io_types": { 00:08:08.401 "read": true, 00:08:08.401 "write": true, 00:08:08.401 "unmap": true, 00:08:08.401 "flush": true, 00:08:08.401 "reset": true, 00:08:08.401 "nvme_admin": true, 00:08:08.401 "nvme_io": true, 00:08:08.401 "nvme_io_md": false, 00:08:08.401 "write_zeroes": true, 00:08:08.401 "zcopy": false, 00:08:08.401 "get_zone_info": false, 00:08:08.401 "zone_management": false, 00:08:08.401 "zone_append": false, 00:08:08.401 "compare": true, 00:08:08.401 "compare_and_write": true, 00:08:08.401 "abort": true, 00:08:08.401 "seek_hole": false, 00:08:08.401 "seek_data": false, 00:08:08.401 "copy": true, 00:08:08.401 "nvme_iov_md": false 00:08:08.401 }, 00:08:08.401 "memory_domains": [ 00:08:08.401 { 00:08:08.401 "dma_device_id": "system", 00:08:08.401 "dma_device_type": 1 00:08:08.401 } 00:08:08.401 ], 00:08:08.401 "driver_specific": { 00:08:08.401 "nvme": [ 00:08:08.401 { 00:08:08.401 "trid": { 00:08:08.401 "trtype": "TCP", 00:08:08.401 "adrfam": "IPv4", 00:08:08.401 "traddr": "10.0.0.2", 00:08:08.401 "trsvcid": "4420", 00:08:08.401 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:08.401 }, 00:08:08.401 "ctrlr_data": { 00:08:08.401 "cntlid": 1, 00:08:08.401 "vendor_id": "0x8086", 00:08:08.401 "model_number": "SPDK bdev Controller", 00:08:08.401 "serial_number": "SPDK0", 00:08:08.401 "firmware_revision": "24.09", 00:08:08.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:08.401 "oacs": { 00:08:08.401 "security": 0, 00:08:08.401 "format": 0, 00:08:08.401 "firmware": 0, 00:08:08.401 "ns_manage": 0 00:08:08.401 }, 00:08:08.401 "multi_ctrlr": true, 00:08:08.401 
"ana_reporting": false 00:08:08.401 }, 00:08:08.401 "vs": { 00:08:08.401 "nvme_version": "1.3" 00:08:08.401 }, 00:08:08.401 "ns_data": { 00:08:08.401 "id": 1, 00:08:08.401 "can_share": true 00:08:08.401 } 00:08:08.401 } 00:08:08.401 ], 00:08:08.401 "mp_policy": "active_passive" 00:08:08.401 } 00:08:08.401 } 00:08:08.401 ] 00:08:08.401 12:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66109 00:08:08.401 12:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:08.401 12:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:08.401 Running I/O for 10 seconds... 00:08:09.340 Latency(us) 00:08:09.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.340 Nvme0n1 : 1.00 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:08:09.340 =================================================================================================================== 00:08:09.340 Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:08:09.340 00:08:10.285 12:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:10.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.542 Nvme0n1 : 2.00 7810.50 30.51 0.00 0.00 0.00 0.00 0.00 00:08:10.542 =================================================================================================================== 00:08:10.542 Total : 7810.50 30.51 0.00 0.00 0.00 0.00 0.00 00:08:10.542 00:08:10.542 true 00:08:10.542 12:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:10.542 12:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:10.799 12:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:10.799 12:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:10.799 12:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66109 00:08:11.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.364 Nvme0n1 : 3.00 7789.33 30.43 0.00 0.00 0.00 0.00 0.00 00:08:11.364 =================================================================================================================== 00:08:11.364 Total : 7789.33 30.43 0.00 0.00 0.00 0.00 0.00 00:08:11.364 00:08:12.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.735 Nvme0n1 : 4.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:12.735 =================================================================================================================== 00:08:12.735 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:12.735 00:08:13.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.668 Nvme0n1 : 5.00 7696.20 30.06 0.00 0.00 0.00 0.00 0.00 00:08:13.668 =================================================================================================================== 00:08:13.668 Total : 7696.20 30.06 0.00 0.00 0.00 
0.00 0.00 00:08:13.668 00:08:14.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.600 Nvme0n1 : 6.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:14.600 =================================================================================================================== 00:08:14.600 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:14.600 00:08:15.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.531 Nvme0n1 : 7.00 7665.43 29.94 0.00 0.00 0.00 0.00 0.00 00:08:15.531 =================================================================================================================== 00:08:15.531 Total : 7665.43 29.94 0.00 0.00 0.00 0.00 0.00 00:08:15.531 00:08:16.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.466 Nvme0n1 : 8.00 7612.12 29.73 0.00 0.00 0.00 0.00 0.00 00:08:16.466 =================================================================================================================== 00:08:16.466 Total : 7612.12 29.73 0.00 0.00 0.00 0.00 0.00 00:08:16.466 00:08:17.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.420 Nvme0n1 : 9.00 7570.67 29.57 0.00 0.00 0.00 0.00 0.00 00:08:17.420 =================================================================================================================== 00:08:17.420 Total : 7570.67 29.57 0.00 0.00 0.00 0.00 0.00 00:08:17.420 00:08:18.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.351 Nvme0n1 : 10.00 7537.50 29.44 0.00 0.00 0.00 0.00 0.00 00:08:18.351 =================================================================================================================== 00:08:18.351 Total : 7537.50 29.44 0.00 0.00 0.00 0.00 0.00 00:08:18.351 00:08:18.351 00:08:18.351 Latency(us) 00:08:18.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.351 Nvme0n1 : 10.02 7537.49 29.44 0.00 0.00 16976.27 10485.76 36938.47 00:08:18.351 =================================================================================================================== 00:08:18.351 Total : 7537.49 29.44 0.00 0.00 16976.27 10485.76 36938.47 00:08:18.351 0 00:08:18.608 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66081 00:08:18.608 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66081 ']' 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66081 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66081 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:18.609 killing process with pid 66081 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66081' 00:08:18.609 Received shutdown signal, test time was about 10.000000 seconds 00:08:18.609 00:08:18.609 
Latency(us) 00:08:18.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.609 =================================================================================================================== 00:08:18.609 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66081 00:08:18.609 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66081 00:08:18.866 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.123 12:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:19.384 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:19.384 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65722 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65722 00:08:19.642 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65722 Killed "${NVMF_APP[@]}" "$@" 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66243 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66243 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66243 ']' 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
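The entries above are the heart of the "dirty" variant: bdevperf is stopped, then the first nvmf_tgt (pid 65722) is killed with SIGKILL, so the lvstore is never closed cleanly and the freshly started target (pid 66243) has to recover it from the AIO file. A minimal sketch of that restart step, using the paths and core mask from this log; the PID handling and the rpc_get_methods readiness probe are simplifications of the killprocess/nvmfappstart/waitforlisten helpers traced above, not the helpers themselves.

SPDK=/home/vagrant/spdk_repo/spdk                         # repo path as used throughout this log
kill -9 "$old_nvmf_pid"                                   # dirty shutdown: lvstore metadata is left open on disk
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &    # fresh single-core target
new_nvmf_pid=$!
# wait until the new target answers on its default RPC socket before issuing RPCs
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done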
00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.642 12:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:19.642 [2024-07-15 12:50:35.579516] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:19.642 [2024-07-15 12:50:35.579639] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.899 [2024-07-15 12:50:35.723123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.899 [2024-07-15 12:50:35.822051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.899 [2024-07-15 12:50:35.822130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.899 [2024-07-15 12:50:35.822142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.899 [2024-07-15 12:50:35.822151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.899 [2024-07-15 12:50:35.822158] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.899 [2024-07-15 12:50:35.822188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.899 [2024-07-15 12:50:35.881376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:20.465 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.465 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:20.465 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.465 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.465 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.723 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.723 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:20.723 [2024-07-15 12:50:36.778274] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:20.723 [2024-07-15 12:50:36.778765] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:20.723 [2024-07-15 12:50:36.778974] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:20.982 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:20.982 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8350e380-589f-4172-886f-9017c8744c58 00:08:20.982 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8350e380-589f-4172-886f-9017c8744c58 00:08:20.982 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:20.982 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
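Re-creating the AIO bdev on top of the same backing file is what triggers the "Performing recovery on blobstore" notice above: the lvstore is replayed from its on-disk metadata and the logical volume reappears without any explicit import RPC, after which the traced waitforbdev helper polls until the bdev is visible again. A condensed equivalent of that step, using the file, bdev name and lvol UUID from this run; the retry loop is an illustrative stand-in for waitforbdev.

SPDK=/home/vagrant/spdk_repo/spdk
# re-register the 400M backing file; blobstore recovery runs as a side effect
"$SPDK/scripts/rpc.py" bdev_aio_create \
    "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096
"$SPDK/scripts/rpc.py" bdev_wait_for_examine
lvol=8350e380-589f-4172-886f-9017c8744c58                 # lvol UUID from this run
# poll until the recovered lvol bdev shows up again (simplified waitforbdev)
until "$SPDK/scripts/rpc.py" bdev_get_bdevs -b "$lvol" -t 2000 >/dev/null 2>&1; do
    sleep 0.1
done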
00:08:20.982 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:20.982 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:20.982 12:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:21.239 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8350e380-589f-4172-886f-9017c8744c58 -t 2000 00:08:21.498 [ 00:08:21.498 { 00:08:21.498 "name": "8350e380-589f-4172-886f-9017c8744c58", 00:08:21.498 "aliases": [ 00:08:21.498 "lvs/lvol" 00:08:21.498 ], 00:08:21.498 "product_name": "Logical Volume", 00:08:21.498 "block_size": 4096, 00:08:21.498 "num_blocks": 38912, 00:08:21.498 "uuid": "8350e380-589f-4172-886f-9017c8744c58", 00:08:21.498 "assigned_rate_limits": { 00:08:21.498 "rw_ios_per_sec": 0, 00:08:21.498 "rw_mbytes_per_sec": 0, 00:08:21.498 "r_mbytes_per_sec": 0, 00:08:21.498 "w_mbytes_per_sec": 0 00:08:21.498 }, 00:08:21.498 "claimed": false, 00:08:21.498 "zoned": false, 00:08:21.498 "supported_io_types": { 00:08:21.498 "read": true, 00:08:21.498 "write": true, 00:08:21.498 "unmap": true, 00:08:21.498 "flush": false, 00:08:21.498 "reset": true, 00:08:21.498 "nvme_admin": false, 00:08:21.498 "nvme_io": false, 00:08:21.498 "nvme_io_md": false, 00:08:21.498 "write_zeroes": true, 00:08:21.498 "zcopy": false, 00:08:21.498 "get_zone_info": false, 00:08:21.498 "zone_management": false, 00:08:21.498 "zone_append": false, 00:08:21.498 "compare": false, 00:08:21.498 "compare_and_write": false, 00:08:21.498 "abort": false, 00:08:21.498 "seek_hole": true, 00:08:21.498 "seek_data": true, 00:08:21.498 "copy": false, 00:08:21.498 "nvme_iov_md": false 00:08:21.498 }, 00:08:21.498 "driver_specific": { 00:08:21.498 "lvol": { 00:08:21.498 "lvol_store_uuid": "32071464-e1aa-4935-94ac-8255cc4d20f0", 00:08:21.498 "base_bdev": "aio_bdev", 00:08:21.498 "thin_provision": false, 00:08:21.498 "num_allocated_clusters": 38, 00:08:21.498 "snapshot": false, 00:08:21.498 "clone": false, 00:08:21.498 "esnap_clone": false 00:08:21.498 } 00:08:21.498 } 00:08:21.498 } 00:08:21.498 ] 00:08:21.498 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:21.498 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:21.498 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:21.756 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:21.756 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:21.756 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:22.014 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:22.014 12:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:22.273 [2024-07-15 12:50:38.083629] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:22.273 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:22.613 request: 00:08:22.613 { 00:08:22.613 "uuid": "32071464-e1aa-4935-94ac-8255cc4d20f0", 00:08:22.613 "method": "bdev_lvol_get_lvstores", 00:08:22.613 "req_id": 1 00:08:22.613 } 00:08:22.613 Got JSON-RPC error response 00:08:22.613 response: 00:08:22.613 { 00:08:22.613 "code": -19, 00:08:22.613 "message": "No such device" 00:08:22.613 } 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.613 aio_bdev 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8350e380-589f-4172-886f-9017c8744c58 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8350e380-589f-4172-886f-9017c8744c58 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:22.613 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.872 12:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8350e380-589f-4172-886f-9017c8744c58 -t 2000 00:08:23.131 [ 00:08:23.131 { 00:08:23.131 "name": "8350e380-589f-4172-886f-9017c8744c58", 00:08:23.131 "aliases": [ 00:08:23.131 "lvs/lvol" 00:08:23.131 ], 00:08:23.131 "product_name": "Logical Volume", 00:08:23.131 "block_size": 4096, 00:08:23.131 "num_blocks": 38912, 00:08:23.131 "uuid": "8350e380-589f-4172-886f-9017c8744c58", 00:08:23.131 "assigned_rate_limits": { 00:08:23.131 "rw_ios_per_sec": 0, 00:08:23.131 "rw_mbytes_per_sec": 0, 00:08:23.131 "r_mbytes_per_sec": 0, 00:08:23.131 "w_mbytes_per_sec": 0 00:08:23.131 }, 00:08:23.131 "claimed": false, 00:08:23.131 "zoned": false, 00:08:23.131 "supported_io_types": { 00:08:23.131 "read": true, 00:08:23.131 "write": true, 00:08:23.131 "unmap": true, 00:08:23.131 "flush": false, 00:08:23.131 "reset": true, 00:08:23.131 "nvme_admin": false, 00:08:23.131 "nvme_io": false, 00:08:23.131 "nvme_io_md": false, 00:08:23.131 "write_zeroes": true, 00:08:23.131 "zcopy": false, 00:08:23.131 "get_zone_info": false, 00:08:23.131 "zone_management": false, 00:08:23.131 "zone_append": false, 00:08:23.131 "compare": false, 00:08:23.131 "compare_and_write": false, 00:08:23.131 "abort": false, 00:08:23.131 "seek_hole": true, 00:08:23.131 "seek_data": true, 00:08:23.131 "copy": false, 00:08:23.131 "nvme_iov_md": false 00:08:23.131 }, 00:08:23.131 "driver_specific": { 00:08:23.131 "lvol": { 00:08:23.131 "lvol_store_uuid": "32071464-e1aa-4935-94ac-8255cc4d20f0", 00:08:23.131 "base_bdev": "aio_bdev", 00:08:23.131 "thin_provision": false, 00:08:23.131 "num_allocated_clusters": 38, 00:08:23.131 "snapshot": false, 00:08:23.131 "clone": false, 00:08:23.131 "esnap_clone": false 00:08:23.131 } 00:08:23.131 } 00:08:23.131 } 00:08:23.131 ] 00:08:23.131 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:23.131 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:23.131 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:23.390 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:23.390 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:23.390 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:23.660 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:23.660 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8350e380-589f-4172-886f-9017c8744c58 00:08:23.920 12:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 32071464-e1aa-4935-94ac-8255cc4d20f0 00:08:24.178 12:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.436 12:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.694 00:08:24.694 real 0m20.604s 00:08:24.694 user 0m43.356s 00:08:24.694 sys 0m8.072s 00:08:24.694 12:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.694 12:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.694 ************************************ 00:08:24.694 END TEST lvs_grow_dirty 00:08:24.694 ************************************ 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:24.953 nvmf_trace.0 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.953 12:50:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.210 rmmod nvme_tcp 00:08:25.210 rmmod nvme_fabrics 00:08:25.210 rmmod nvme_keyring 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66243 ']' 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66243 00:08:25.210 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66243 ']' 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66243 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66243 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:25.211 killing process with pid 66243 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66243' 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66243 00:08:25.211 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66243 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:25.527 00:08:25.527 real 0m41.476s 00:08:25.527 user 1m6.959s 00:08:25.527 sys 0m11.326s 00:08:25.527 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.528 12:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.528 ************************************ 00:08:25.528 END TEST nvmf_lvs_grow 00:08:25.528 ************************************ 00:08:25.528 12:50:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:25.528 12:50:41 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:25.528 12:50:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.528 12:50:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.528 12:50:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.528 ************************************ 00:08:25.528 START TEST nvmf_bdev_io_wait 00:08:25.528 ************************************ 00:08:25.528 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:25.837 * Looking for test storage... 
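Before bdev_io_wait.sh gets going, the lvs_grow run above finishes with its standard teardown: the nvmf trace left in shared memory is archived, the kernel NVMe/TCP initiator modules are removed, the target (pid 66243) is stopped, and the test namespace and initiator address are cleared. A rough shell equivalent of that sequence; $nvmf_pid and $output_dir stand in for the helpers' bookkeeping, and the explicit ip netns delete is the effect of the traced _remove_spdk_ns call rather than a command shown verbatim in the log.

# archive the shared-memory trace for offline analysis
tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
# drop the kernel initiator modules pulled in for the TCP transport
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the target cleanly this time and wait for it to exit
kill "$nvmf_pid" && wait "$nvmf_pid" 2>/dev/null
# tear down the target namespace and flush the initiator-side address
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush nvmf_init_if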
00:08:25.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.837 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:25.838 Cannot find device "nvmf_tgt_br" 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:25.838 Cannot find device "nvmf_tgt_br2" 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:25.838 Cannot find device "nvmf_tgt_br" 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:25.838 Cannot find device "nvmf_tgt_br2" 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
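With any leftovers from a previous run deleted (the "Cannot find device" messages are expected on a clean host), nvmf_veth_init rebuilds the test topology in the entries that follow: a network namespace for the target, veth pairs bridged back to the initiator side, and the 10.0.0.x addresses used by the rest of the test. Condensed from the traced commands (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and is omitted here):

ip netns add nvmf_tgt_ns_spdk                                    # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge joining both sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in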
00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:25.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:25.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:25.838 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:26.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:26.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:08:26.098 00:08:26.098 --- 10.0.0.2 ping statistics --- 00:08:26.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.098 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:26.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:26.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:08:26.098 00:08:26.098 --- 10.0.0.3 ping statistics --- 00:08:26.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.098 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:26.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:26.098 00:08:26.098 --- 10.0.0.1 ping statistics --- 00:08:26.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.098 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66554 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66554 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66554 ']' 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
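The pings above confirm the bridged topology works, and the target for this test is then started with --wait-for-rpc, so it pauses before subsystem initialization and everything else is driven over the RPC socket: bdev options first, then framework init, then the TCP transport and the Malloc-backed subsystem, as the entries that follow show. A plain-shell equivalent of that sequence (rpc_cmd in the trace is a thin wrapper around rpc.py; the test waits for /var/tmp/spdk.sock before issuing these calls):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# ... wait for the RPC socket to come up, then configure the paused target:
$RPC bdev_set_options -p 5 -c 1             # bdev-layer options must be set before framework init
$RPC framework_start_init                   # finish subsystem initialization
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0   # 64 MiB malloc bdev used as the namespace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420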
00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.098 12:50:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.098 [2024-07-15 12:50:42.026050] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:26.098 [2024-07-15 12:50:42.026161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.357 [2024-07-15 12:50:42.163617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.357 [2024-07-15 12:50:42.273307] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.357 [2024-07-15 12:50:42.273398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.357 [2024-07-15 12:50:42.273428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.357 [2024-07-15 12:50:42.273437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.357 [2024-07-15 12:50:42.273445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.357 [2024-07-15 12:50:42.273557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.357 [2024-07-15 12:50:42.274339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.357 [2024-07-15 12:50:42.274459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.357 [2024-07-15 12:50:42.274463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 [2024-07-15 12:50:43.112127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.293 
12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 [2024-07-15 12:50:43.129275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 Malloc0 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.293 [2024-07-15 12:50:43.198443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66589 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:27.293 { 00:08:27.293 "params": { 00:08:27.293 "name": "Nvme$subsystem", 00:08:27.293 "trtype": "$TEST_TRANSPORT", 00:08:27.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.293 "adrfam": "ipv4", 00:08:27.293 "trsvcid": "$NVMF_PORT", 00:08:27.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.293 "hdgst": ${hdgst:-false}, 00:08:27.293 
"ddgst": ${ddgst:-false} 00:08:27.293 }, 00:08:27.293 "method": "bdev_nvme_attach_controller" 00:08:27.293 } 00:08:27.293 EOF 00:08:27.293 )") 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66591 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66594 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:27.293 { 00:08:27.293 "params": { 00:08:27.293 "name": "Nvme$subsystem", 00:08:27.293 "trtype": "$TEST_TRANSPORT", 00:08:27.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.293 "adrfam": "ipv4", 00:08:27.293 "trsvcid": "$NVMF_PORT", 00:08:27.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.293 "hdgst": ${hdgst:-false}, 00:08:27.293 "ddgst": ${ddgst:-false} 00:08:27.293 }, 00:08:27.293 "method": "bdev_nvme_attach_controller" 00:08:27.293 } 00:08:27.293 EOF 00:08:27.293 )") 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66597 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:27.293 { 00:08:27.293 "params": { 00:08:27.293 "name": "Nvme$subsystem", 00:08:27.293 "trtype": "$TEST_TRANSPORT", 00:08:27.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.293 "adrfam": "ipv4", 00:08:27.293 "trsvcid": "$NVMF_PORT", 00:08:27.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.293 "hdgst": ${hdgst:-false}, 00:08:27.293 "ddgst": ${ddgst:-false} 00:08:27.293 }, 00:08:27.293 "method": "bdev_nvme_attach_controller" 00:08:27.293 } 00:08:27.293 EOF 00:08:27.293 )") 00:08:27.293 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:27.294 { 00:08:27.294 "params": { 00:08:27.294 "name": "Nvme$subsystem", 00:08:27.294 "trtype": "$TEST_TRANSPORT", 00:08:27.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.294 "adrfam": "ipv4", 00:08:27.294 "trsvcid": "$NVMF_PORT", 00:08:27.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.294 "hdgst": ${hdgst:-false}, 00:08:27.294 "ddgst": ${ddgst:-false} 00:08:27.294 }, 00:08:27.294 "method": "bdev_nvme_attach_controller" 00:08:27.294 } 00:08:27.294 EOF 00:08:27.294 )") 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:27.294 "params": { 00:08:27.294 "name": "Nvme1", 00:08:27.294 "trtype": "tcp", 00:08:27.294 "traddr": "10.0.0.2", 00:08:27.294 "adrfam": "ipv4", 00:08:27.294 "trsvcid": "4420", 00:08:27.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.294 "hdgst": false, 00:08:27.294 "ddgst": false 00:08:27.294 }, 00:08:27.294 "method": "bdev_nvme_attach_controller" 00:08:27.294 }' 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:27.294 "params": { 00:08:27.294 "name": "Nvme1", 00:08:27.294 "trtype": "tcp", 00:08:27.294 "traddr": "10.0.0.2", 00:08:27.294 "adrfam": "ipv4", 00:08:27.294 "trsvcid": "4420", 00:08:27.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.294 "hdgst": false, 00:08:27.294 "ddgst": false 00:08:27.294 }, 00:08:27.294 "method": "bdev_nvme_attach_controller" 00:08:27.294 }' 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
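The rpc_cmd calls traced earlier (bdev_set_options through nvmf_subsystem_add_listener) are what actually provision the target before the bdevperf jobs start. A condensed sketch of the same sequence issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock; rpc_cmd in the harness wraps this, and the wrapper itself is not reproduced here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1      # tiny bdev_io pool/cache, so the io_wait path this test targets is exercised
    $rpc framework_start_init            # finish init (nvmf_tgt was launched with --wait-for-rpc)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420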
00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:27.294 "params": { 00:08:27.294 "name": "Nvme1", 00:08:27.294 "trtype": "tcp", 00:08:27.294 "traddr": "10.0.0.2", 00:08:27.294 "adrfam": "ipv4", 00:08:27.294 "trsvcid": "4420", 00:08:27.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.294 "hdgst": false, 00:08:27.294 "ddgst": false 00:08:27.294 }, 00:08:27.294 "method": "bdev_nvme_attach_controller" 00:08:27.294 }' 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:27.294 "params": { 00:08:27.294 "name": "Nvme1", 00:08:27.294 "trtype": "tcp", 00:08:27.294 "traddr": "10.0.0.2", 00:08:27.294 "adrfam": "ipv4", 00:08:27.294 "trsvcid": "4420", 00:08:27.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.294 "hdgst": false, 00:08:27.294 "ddgst": false 00:08:27.294 }, 00:08:27.294 "method": "bdev_nvme_attach_controller" 00:08:27.294 }' 00:08:27.294 12:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66589 00:08:27.294 [2024-07-15 12:50:43.265854] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:27.294 [2024-07-15 12:50:43.265956] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:27.294 [2024-07-15 12:50:43.268185] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:27.294 [2024-07-15 12:50:43.268274] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:27.294 [2024-07-15 12:50:43.287914] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:27.294 [2024-07-15 12:50:43.288005] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:27.294 [2024-07-15 12:50:43.305548] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:27.294 [2024-07-15 12:50:43.305872] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:27.552 [2024-07-15 12:50:43.476843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.552 [2024-07-15 12:50:43.553488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.552 [2024-07-15 12:50:43.578003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:27.810 [2024-07-15 12:50:43.630948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.810 [2024-07-15 12:50:43.643837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.810 [2024-07-15 12:50:43.644906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:27.810 [2024-07-15 12:50:43.692534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.810 [2024-07-15 12:50:43.712772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.810 [2024-07-15 12:50:43.736160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.810 Running I/O for 1 seconds... 00:08:27.810 [2024-07-15 12:50:43.787371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.810 Running I/O for 1 seconds... 00:08:27.810 [2024-07-15 12:50:43.805913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:27.810 [2024-07-15 12:50:43.851462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.068 Running I/O for 1 seconds... 00:08:28.068 Running I/O for 1 seconds... 
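At this point four bdevperf instances are running concurrently, one workload each, and their result tables follow below. A condensed sketch of how bdev_io_wait.sh drives them, as reflected in the trace; the "--json /dev/fd/63" seen in the command lines is bash process substitution feeding each instance the config generated by gen_nvmf_target_json, a harness helper not reproduced here:

    bp=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # Distinct core masks (-m) and instance ids (-i) keep the four processes from
    # colliding on reactor cores or shared-memory prefixes; each runs for 1 second.
    $bp -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $bp -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
    $bp -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $bp -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID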
00:08:29.005 00:08:29.005 Latency(us) 00:08:29.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.005 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:29.005 Nvme1n1 : 1.00 175101.11 683.99 0.00 0.00 728.32 336.99 1146.88 00:08:29.005 =================================================================================================================== 00:08:29.005 Total : 175101.11 683.99 0.00 0.00 728.32 336.99 1146.88 00:08:29.005 00:08:29.005 Latency(us) 00:08:29.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.005 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:29.005 Nvme1n1 : 1.01 9009.89 35.19 0.00 0.00 14127.77 9115.46 20494.89 00:08:29.005 =================================================================================================================== 00:08:29.005 Total : 9009.89 35.19 0.00 0.00 14127.77 9115.46 20494.89 00:08:29.005 00:08:29.005 Latency(us) 00:08:29.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.005 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:29.005 Nvme1n1 : 1.01 8100.24 31.64 0.00 0.00 15722.83 5987.61 25022.84 00:08:29.005 =================================================================================================================== 00:08:29.005 Total : 8100.24 31.64 0.00 0.00 15722.83 5987.61 25022.84 00:08:29.005 00:08:29.005 Latency(us) 00:08:29.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.005 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:29.005 Nvme1n1 : 1.01 9033.08 35.29 0.00 0.00 14110.19 7328.12 26452.71 00:08:29.005 =================================================================================================================== 00:08:29.005 Total : 9033.08 35.29 0.00 0.00 14110.19 7328.12 26452.71 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66591 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66594 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66597 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.263 rmmod nvme_tcp 00:08:29.263 rmmod nvme_fabrics 00:08:29.263 rmmod nvme_keyring 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66554 ']' 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66554 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66554 ']' 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66554 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:29.263 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66554 00:08:29.522 killing process with pid 66554 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66554' 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66554 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66554 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.522 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.782 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:29.782 00:08:29.782 real 0m4.091s 00:08:29.782 user 0m17.660s 00:08:29.782 sys 0m2.296s 00:08:29.782 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.782 ************************************ 00:08:29.782 END TEST nvmf_bdev_io_wait 00:08:29.782 ************************************ 00:08:29.782 12:50:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 12:50:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:29.782 12:50:45 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:29.782 12:50:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.782 12:50:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.782 12:50:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 ************************************ 00:08:29.782 START TEST nvmf_queue_depth 00:08:29.782 ************************************ 00:08:29.782 12:50:45 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:29.782 * Looking for test storage... 00:08:29.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.782 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:29.783 Cannot find device "nvmf_tgt_br" 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.783 Cannot find device "nvmf_tgt_br2" 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:29.783 Cannot find device "nvmf_tgt_br" 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:29.783 Cannot find device "nvmf_tgt_br2" 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:29.783 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:29.783 12:50:45 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.041 12:50:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:08:30.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:08:30.041 00:08:30.041 --- 10.0.0.2 ping statistics --- 00:08:30.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.041 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:30.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:08:30.041 00:08:30.041 --- 10.0.0.3 ping statistics --- 00:08:30.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.041 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:30.041 00:08:30.041 --- 10.0.0.1 ping statistics --- 00:08:30.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.041 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.041 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66830 00:08:30.042 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:30.042 12:50:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66830 00:08:30.042 12:50:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66830 ']' 00:08:30.042 12:50:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.042 12:50:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.042 12:50:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
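nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app's RPC socket answers. A condensed sketch of that pattern; waitforlisten's internal loop is not shown in this log, so a plain rpc_get_methods poll stands in for it here:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before its RPC socket came up" >&2; exit 1; }
        sleep 0.5
    done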
00:08:30.042 12:50:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.042 12:50:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.300 [2024-07-15 12:50:46.137928] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:30.300 [2024-07-15 12:50:46.138058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.300 [2024-07-15 12:50:46.282098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.559 [2024-07-15 12:50:46.389261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.559 [2024-07-15 12:50:46.389326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.559 [2024-07-15 12:50:46.389340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.559 [2024-07-15 12:50:46.389349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.559 [2024-07-15 12:50:46.389375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.559 [2024-07-15 12:50:46.389407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.559 [2024-07-15 12:50:46.445570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.127 [2024-07-15 12:50:47.134189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.127 Malloc0 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.127 12:50:47 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.386 [2024-07-15 12:50:47.204185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66862 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66862 /var/tmp/bdevperf.sock 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66862 ']' 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.386 12:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.386 [2024-07-15 12:50:47.267737] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:31.386 [2024-07-15 12:50:47.268134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66862 ] 00:08:31.386 [2024-07-15 12:50:47.411489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.646 [2024-07-15 12:50:47.535887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.646 [2024-07-15 12:50:47.595879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.610 12:50:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.610 12:50:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:32.610 12:50:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:32.610 12:50:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.610 12:50:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.610 NVMe0n1 00:08:32.610 12:50:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.610 12:50:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.610 Running I/O for 10 seconds... 00:08:42.580 00:08:42.580 Latency(us) 00:08:42.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.580 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:42.580 Verification LBA range: start 0x0 length 0x4000 00:08:42.580 NVMe0n1 : 10.07 7743.89 30.25 0.00 0.00 131669.60 12749.73 94848.47 00:08:42.580 =================================================================================================================== 00:08:42.580 Total : 7743.89 30.25 0.00 0.00 131669.60 12749.73 94848.47 00:08:42.580 0 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66862 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66862 ']' 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66862 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66862 00:08:42.580 killing process with pid 66862 00:08:42.580 Received shutdown signal, test time was about 10.000000 seconds 00:08:42.580 00:08:42.580 Latency(us) 00:08:42.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.580 =================================================================================================================== 00:08:42.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 66862' 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66862 00:08:42.580 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66862 00:08:42.837 12:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:42.837 12:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:42.837 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.837 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.095 rmmod nvme_tcp 00:08:43.095 rmmod nvme_fabrics 00:08:43.095 rmmod nvme_keyring 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66830 ']' 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66830 00:08:43.095 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66830 ']' 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66830 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66830 00:08:43.096 killing process with pid 66830 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66830' 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66830 00:08:43.096 12:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66830 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:43.354 00:08:43.354 real 
0m13.636s 00:08:43.354 user 0m23.540s 00:08:43.354 sys 0m2.371s 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.354 12:50:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 ************************************ 00:08:43.354 END TEST nvmf_queue_depth 00:08:43.354 ************************************ 00:08:43.354 12:50:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:43.354 12:50:59 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:43.354 12:50:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.354 12:50:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.354 12:50:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 ************************************ 00:08:43.354 START TEST nvmf_target_multipath 00:08:43.354 ************************************ 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:43.354 * Looking for test storage... 00:08:43.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.354 12:50:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.355 12:50:59 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.614 12:50:59 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:43.614 Cannot find device "nvmf_tgt_br" 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.614 Cannot find device "nvmf_tgt_br2" 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:43.614 Cannot find device "nvmf_tgt_br" 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:43.614 Cannot find device "nvmf_tgt_br2" 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:43.614 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.872 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.872 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.872 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.872 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.872 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:43.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:08:43.872 00:08:43.872 --- 10.0.0.2 ping statistics --- 00:08:43.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.872 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:43.872 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:43.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:08:43.872 00:08:43.872 --- 10.0.0.3 ping statistics --- 00:08:43.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.872 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:43.872 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:43.873 00:08:43.873 --- 10.0.0.1 ping statistics --- 00:08:43.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.873 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67179 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67179 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67179 ']' 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.873 12:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:43.873 [2024-07-15 12:50:59.813139] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
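The nvmf_veth_init and nvmfappstart steps traced above reduce to a short bring-up: a namespaced target with two veth-backed addresses (10.0.0.2 and 10.0.0.3) bridged to the initiator interface (10.0.0.1), then the target app launched inside that namespace. The following is a condensed sketch of the commands shown in the log, not the common.sh implementation itself; it assumes root privileges, iproute2 and iptables, and an SPDK build at /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt.

# Target interfaces live in their own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target path 1 (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target path 2 (10.0.0.3)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator reaches both target addresses
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &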
00:08:43.873 [2024-07-15 12:50:59.813222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.131 [2024-07-15 12:50:59.955294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.131 [2024-07-15 12:51:00.056231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.131 [2024-07-15 12:51:00.056558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.131 [2024-07-15 12:51:00.056696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.131 [2024-07-15 12:51:00.056751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.131 [2024-07-15 12:51:00.056853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.131 [2024-07-15 12:51:00.057053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.131 [2024-07-15 12:51:00.057186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.131 [2024-07-15 12:51:00.058860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.131 [2024-07-15 12:51:00.058905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.131 [2024-07-15 12:51:00.112356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.065 12:51:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.065 12:51:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:08:45.065 12:51:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.065 12:51:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.065 12:51:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:45.065 12:51:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.065 12:51:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:45.065 [2024-07-15 12:51:01.090860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.323 12:51:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:45.323 Malloc0 00:08:45.582 12:51:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:45.840 12:51:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.840 12:51:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.099 [2024-07-15 12:51:02.086650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.099 12:51:02 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:46.358 [2024-07-15 12:51:02.390969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:46.358 12:51:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:46.617 12:51:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:46.617 12:51:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.617 12:51:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:46.617 12:51:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.617 12:51:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:46.617 12:51:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:49.301 12:51:04 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:49.301 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.302 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:49.302 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:49.302 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:49.302 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:49.302 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67274 00:08:49.302 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:49.302 12:51:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:49.302 [global] 00:08:49.302 thread=1 00:08:49.302 invalidate=1 00:08:49.302 rw=randrw 00:08:49.302 time_based=1 00:08:49.302 runtime=6 00:08:49.302 ioengine=libaio 00:08:49.302 direct=1 00:08:49.302 bs=4096 00:08:49.302 iodepth=128 00:08:49.302 norandommap=0 00:08:49.302 numjobs=1 00:08:49.302 00:08:49.302 verify_dump=1 00:08:49.302 verify_backlog=512 00:08:49.302 verify_state_save=0 00:08:49.302 do_verify=1 00:08:49.302 verify=crc32c-intel 00:08:49.302 [job0] 00:08:49.302 filename=/dev/nvme0n1 00:08:49.302 Could not set queue depth (nvme0n1) 00:08:49.302 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:49.302 fio-3.35 00:08:49.302 Starting 1 thread 00:08:49.868 12:51:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:50.126 12:51:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:50.384 
12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:50.384 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:50.641 12:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67274 00:08:55.902 00:08:55.902 job0: (groupid=0, jobs=1): err= 0: pid=67301: Mon Jul 15 12:51:11 2024 00:08:55.902 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(251MiB/6006msec) 00:08:55.902 slat (usec): min=3, max=5837, avg=55.23, stdev=219.25 00:08:55.903 clat (usec): min=1636, max=13884, avg=8171.70, stdev=1462.54 00:08:55.903 lat (usec): min=1646, max=13916, avg=8226.93, stdev=1466.86 00:08:55.903 clat percentiles (usec): 00:08:55.903 | 1.00th=[ 4228], 5.00th=[ 6128], 10.00th=[ 6980], 20.00th=[ 7439], 00:08:55.903 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:08:55.903 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[11731], 00:08:55.903 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13304], 99.95th=[13435], 00:08:55.903 | 99.99th=[13698] 00:08:55.903 bw ( KiB/s): min= 7472, max=26360, per=52.45%, avg=22400.67, stdev=4998.99, samples=12 00:08:55.903 iops : min= 1868, max= 6590, avg=5600.17, stdev=1249.75, samples=12 00:08:55.903 write: IOPS=6072, BW=23.7MiB/s (24.9MB/s)(131MiB/5542msec); 0 zone resets 00:08:55.903 slat (usec): min=4, max=1511, avg=63.24, stdev=153.19 00:08:55.903 clat (usec): min=1637, max=13649, avg=7071.47, stdev=1294.73 00:08:55.903 lat (usec): min=1661, max=13674, avg=7134.70, stdev=1298.69 00:08:55.903 clat percentiles (usec): 00:08:55.903 | 1.00th=[ 3195], 5.00th=[ 4146], 10.00th=[ 5276], 20.00th=[ 6587], 00:08:55.903 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:08:55.903 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8455], 00:08:55.903 | 99.00th=[11076], 99.50th=[11469], 99.90th=[12780], 99.95th=[13173], 00:08:55.903 | 99.99th=[13435] 00:08:55.903 bw ( KiB/s): min= 7584, max=26208, per=92.22%, avg=22400.00, stdev=4858.22, samples=12 00:08:55.903 iops : min= 1896, max= 6552, avg=5600.00, stdev=1214.56, samples=12 00:08:55.903 lat (msec) : 2=0.03%, 4=1.91%, 10=91.95%, 20=6.10% 00:08:55.903 cpu : usr=5.43%, sys=21.81%, ctx=5638, majf=0, minf=96 00:08:55.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:55.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.903 issued rwts: total=64129,33654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.903 00:08:55.903 Run status group 0 (all jobs): 00:08:55.903 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=251MiB (263MB), run=6006-6006msec 00:08:55.903 WRITE: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=131MiB (138MB), run=5542-5542msec 00:08:55.903 00:08:55.903 Disk stats (read/write): 00:08:55.903 nvme0n1: ios=63189/33022, merge=0/0, ticks=495716/219311, in_queue=715027, util=98.62% 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67375 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:55.903 12:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:55.903 [global] 00:08:55.903 thread=1 00:08:55.903 invalidate=1 00:08:55.903 rw=randrw 00:08:55.903 time_based=1 00:08:55.903 runtime=6 00:08:55.903 ioengine=libaio 00:08:55.903 direct=1 00:08:55.903 bs=4096 00:08:55.903 iodepth=128 00:08:55.903 norandommap=0 00:08:55.903 numjobs=1 00:08:55.903 00:08:55.903 verify_dump=1 00:08:55.903 verify_backlog=512 00:08:55.903 verify_state_save=0 00:08:55.903 do_verify=1 00:08:55.903 verify=crc32c-intel 00:08:55.903 [job0] 00:08:55.903 filename=/dev/nvme0n1 00:08:55.903 Could not set queue depth (nvme0n1) 00:08:55.903 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:55.903 fio-3.35 00:08:55.903 Starting 1 thread 00:08:56.859 12:51:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:56.859 12:51:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:57.118 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:57.376 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:57.634 12:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67375 00:09:01.887 00:09:01.887 job0: (groupid=0, jobs=1): err= 0: pid=67399: Mon Jul 15 12:51:17 2024 00:09:01.887 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(273MiB/6005msec) 00:09:01.887 slat (usec): min=2, max=5681, avg=42.13, stdev=188.21 00:09:01.887 clat (usec): min=598, max=16565, avg=7434.31, stdev=1923.65 00:09:01.887 lat (usec): min=610, max=16576, avg=7476.45, stdev=1938.83 00:09:01.887 clat percentiles (usec): 00:09:01.887 | 1.00th=[ 2868], 5.00th=[ 4146], 10.00th=[ 4883], 20.00th=[ 5735], 00:09:01.887 | 30.00th=[ 6718], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 8029], 00:09:01.887 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[11207], 00:09:01.887 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13960], 99.95th=[14746], 00:09:01.887 | 99.99th=[15795] 00:09:01.887 bw ( KiB/s): min=11224, max=45152, per=54.87%, avg=25546.91, stdev=10148.17, samples=11 00:09:01.887 iops : min= 2806, max=11288, avg=6386.73, stdev=2537.04, samples=11 00:09:01.887 write: IOPS=7043, BW=27.5MiB/s (28.8MB/s)(150MiB/5447msec); 0 zone resets 00:09:01.887 slat (usec): min=4, max=4838, avg=53.69, stdev=135.23 00:09:01.887 clat (usec): min=1193, max=14842, avg=6320.24, stdev=1772.68 00:09:01.887 lat (usec): min=1220, max=14867, avg=6373.93, stdev=1786.99 00:09:01.887 clat percentiles (usec): 00:09:01.887 | 1.00th=[ 2442], 5.00th=[ 3228], 10.00th=[ 3687], 20.00th=[ 4424], 00:09:01.887 | 30.00th=[ 5211], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7242], 00:09:01.887 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8291], 00:09:01.887 | 99.00th=[10814], 99.50th=[11338], 99.90th=[12780], 99.95th=[13173], 00:09:01.887 | 99.99th=[14615] 00:09:01.887 bw ( KiB/s): min=11256, max=44464, per=90.59%, avg=25522.91, stdev=9976.45, samples=11 00:09:01.887 iops : min= 2814, max=11116, avg=6380.73, stdev=2494.11, samples=11 00:09:01.887 lat (usec) : 750=0.02%, 1000=0.01% 00:09:01.887 lat (msec) : 2=0.19%, 4=7.64%, 10=87.92%, 20=4.22% 00:09:01.887 cpu : usr=6.04%, sys=23.93%, ctx=6119, majf=0, minf=133 00:09:01.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:01.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.887 issued rwts: total=69895,38366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.887 00:09:01.887 Run status group 0 (all jobs): 00:09:01.887 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=273MiB (286MB), run=6005-6005msec 00:09:01.887 WRITE: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=150MiB (157MB), run=5447-5447msec 00:09:01.887 00:09:01.888 Disk stats (read/write): 00:09:01.888 nvme0n1: ios=69049/37749, merge=0/0, ticks=487506/221518, in_queue=709024, util=98.68% 00:09:01.888 12:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:02.146 12:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.146 12:51:17 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:02.146 12:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:02.146 12:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.146 12:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:02.146 12:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.146 12:51:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:02.146 12:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.405 rmmod nvme_tcp 00:09:02.405 rmmod nvme_fabrics 00:09:02.405 rmmod nvme_keyring 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67179 ']' 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67179 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67179 ']' 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67179 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67179 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67179' 00:09:02.405 killing process with pid 67179 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67179 00:09:02.405 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67179 00:09:02.664 
12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:02.664 00:09:02.664 real 0m19.381s 00:09:02.664 user 1m12.968s 00:09:02.664 sys 0m9.601s 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.664 12:51:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:02.664 ************************************ 00:09:02.664 END TEST nvmf_target_multipath 00:09:02.664 ************************************ 00:09:02.924 12:51:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:02.924 12:51:18 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:02.924 12:51:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.924 12:51:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.924 12:51:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.924 ************************************ 00:09:02.924 START TEST nvmf_zcopy 00:09:02.924 ************************************ 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:02.924 * Looking for test storage... 
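Before the zcopy setup repeats the same network bring-up, the RPC and connect sequence driven by the multipath test above is worth seeing in one place. The sketch below restates the calls as they appear in the trace (same NQN, serial and listener addresses) and is illustrative only; $NVME_HOSTNQN and $NVME_HOSTID stand for the values the harness generates with 'nvme gen-hostnqn', and the target is assumed to be up and listening on /var/tmp/spdk.sock.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512 B blocks
# Subsystem with any host allowed (-a) and ANA reporting enabled (-r), as in the trace.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Initiator side: one controller per listener, flags copied from the trace.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# Failover is then exercised while fio runs by flipping ANA states per listener, e.g.:
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized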
00:09:02.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.924 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:02.925 Cannot find device "nvmf_tgt_br" 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:02.925 Cannot find device "nvmf_tgt_br2" 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:02.925 Cannot find device "nvmf_tgt_br" 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:02.925 Cannot find device "nvmf_tgt_br2" 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:02.925 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:03.183 12:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:03.183 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:03.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:03.184 00:09:03.184 --- 10.0.0.2 ping statistics --- 00:09:03.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.184 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:03.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:03.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:09:03.184 00:09:03.184 --- 10.0.0.3 ping statistics --- 00:09:03.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.184 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:03.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:09:03.184 00:09:03.184 --- 10.0.0.1 ping statistics --- 00:09:03.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.184 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67648 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67648 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67648 ']' 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.184 12:51:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.184 [2024-07-15 12:51:19.231462] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:03.184 [2024-07-15 12:51:19.231594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.442 [2024-07-15 12:51:19.368598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.442 [2024-07-15 12:51:19.498999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.442 [2024-07-15 12:51:19.499063] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
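The nvmf_veth_init sequence traced above builds the virtual test network before the target is started: an initiator-side veth pair (nvmf_init_if / nvmf_init_br) stays in the root namespace, two target-side pairs (nvmf_tgt_if / nvmf_tgt_br and nvmf_tgt_if2 / nvmf_tgt_br2) have their "if" ends moved into the nvmf_tgt_ns_spdk namespace, and all bridge-side ends are enslaved to nvmf_br. The earlier "Cannot find device" and "Cannot open network namespace" messages are only the cleanup of a previous topology that did not exist yet. Condensed into plain commands (a sketch with names and addresses taken from the trace; the real logic, including cleanup and error handling, lives in nvmf/common.sh):

    # namespace plus three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and address everything
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring the links up and join the host-side ends with the nvmf_br bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic (port 4420) and forwarding between bridge ports,
    # verify reachability, then start the target inside the namespace
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &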
00:09:03.442 [2024-07-15 12:51:19.499076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.442 [2024-07-15 12:51:19.499086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.442 [2024-07-15 12:51:19.499095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.442 [2024-07-15 12:51:19.499131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.701 [2024-07-15 12:51:19.557691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.273 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.274 [2024-07-15 12:51:20.259342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.274 [2024-07-15 12:51:20.275405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:04.274 malloc0 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:04.274 { 00:09:04.274 "params": { 00:09:04.274 "name": "Nvme$subsystem", 00:09:04.274 "trtype": "$TEST_TRANSPORT", 00:09:04.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.274 "adrfam": "ipv4", 00:09:04.274 "trsvcid": "$NVMF_PORT", 00:09:04.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.274 "hdgst": ${hdgst:-false}, 00:09:04.274 "ddgst": ${ddgst:-false} 00:09:04.274 }, 00:09:04.274 "method": "bdev_nvme_attach_controller" 00:09:04.274 } 00:09:04.274 EOF 00:09:04.274 )") 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:04.274 12:51:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:04.274 "params": { 00:09:04.274 "name": "Nvme1", 00:09:04.274 "trtype": "tcp", 00:09:04.274 "traddr": "10.0.0.2", 00:09:04.274 "adrfam": "ipv4", 00:09:04.274 "trsvcid": "4420", 00:09:04.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:04.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:04.274 "hdgst": false, 00:09:04.274 "ddgst": false 00:09:04.274 }, 00:09:04.274 "method": "bdev_nvme_attach_controller" 00:09:04.274 }' 00:09:04.532 [2024-07-15 12:51:20.370650] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:04.532 [2024-07-15 12:51:20.370779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67681 ] 00:09:04.532 [2024-07-15 12:51:20.511563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.791 [2024-07-15 12:51:20.641980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.791 [2024-07-15 12:51:20.708493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.791 Running I/O for 10 seconds... 
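The target-side setup driven through rpc_cmd above (rpc_cmd is the autotest helper that hands its arguments to scripts/rpc.py) boils down to the following standalone calls; the argument values are copied from the trace, and the --zcopy flag on the TCP transport is the feature this nvmf_zcopy test exercises. bdevperf then runs the 10-second verify workload just announced (queue depth 128, 8 KiB I/O) against the subsystem through the generated JSON config. A sketch:

    # TCP transport with zero-copy enabled and in-capsule data size set to 0
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # data and discovery listeners on 10.0.0.2:4420 inside the target namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with a 4096-byte block size, attached as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1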
00:09:16.999
00:09:16.999 Latency(us)
00:09:16.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:16.999 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:16.999 Verification LBA range: start 0x0 length 0x1000
00:09:16.999 Nvme1n1 : 10.02 6047.64 47.25 0.00 0.00 21096.73 3083.17 31457.28
00:09:16.999 ===================================================================================================================
00:09:16.999 Total : 6047.64 47.25 0.00 0.00 21096.73 3083.17 31457.28
00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67797 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:16.999 { 00:09:16.999 "params": { 00:09:16.999 "name": "Nvme$subsystem", 00:09:16.999 "trtype": "$TEST_TRANSPORT", 00:09:16.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.999 "adrfam": "ipv4", 00:09:16.999 "trsvcid": "$NVMF_PORT", 00:09:16.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.999 "hdgst": ${hdgst:-false}, 00:09:16.999 "ddgst": ${ddgst:-false} 00:09:16.999 }, 00:09:16.999 "method": "bdev_nvme_attach_controller" 00:09:16.999 } 00:09:16.999 EOF 00:09:16.999 )") 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
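The second bdevperf invocation switches to a 5-second 50/50 random read/write workload (-t 5 -q 128 -w randrw -M 50 -o 8192) against the same gen_nvmf_target_json-generated controller config. The long run of paired "Requested NSID 1 already in use" / "Unable to add namespace" messages that follows is the target rejecting nvmf_subsystem_add_ns calls for a namespace that is still attached; the script appears to issue them in a loop while the I/O job is running, so each attempt forces the subsystem through its pause/resume path with zero-copy requests outstanding, and each rejection looks like the expected outcome rather than a test failure. A plausible shape for that loop (a sketch inferred from the trace, not the literal zcopy.sh code):

    # bdevperf was started in the background with its pid captured as perfpid (zcopy.sh@39)
    while kill -0 "$perfpid" 2> /dev/null; do
        # always rejected: NSID 1 is already occupied by malloc0, but the attempt
        # pauses and resumes the subsystem while zero-copy I/O is in flight
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done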
00:09:16.999 [2024-07-15 12:51:31.070155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.070201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:16.999 12:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:16.999 "params": { 00:09:16.999 "name": "Nvme1", 00:09:16.999 "trtype": "tcp", 00:09:16.999 "traddr": "10.0.0.2", 00:09:16.999 "adrfam": "ipv4", 00:09:16.999 "trsvcid": "4420", 00:09:16.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.999 "hdgst": false, 00:09:16.999 "ddgst": false 00:09:16.999 }, 00:09:16.999 "method": "bdev_nvme_attach_controller" 00:09:16.999 }' 00:09:16.999 [2024-07-15 12:51:31.082100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.082129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.094109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.094143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.106118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.106154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.116639] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:16.999 [2024-07-15 12:51:31.116737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67797 ] 00:09:16.999 [2024-07-15 12:51:31.118107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.118133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.130110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.130142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.142109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.142155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.154114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.154159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.166118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.166164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.178125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.178155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.190128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.190158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.202133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.202163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.214136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.214167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.226139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.226169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.238164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.238196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.250161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.250191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.257513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.999 [2024-07-15 12:51:31.262179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.262213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.274188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.274230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.286167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.286200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.298169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.298201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.310180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.310212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.322193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.322233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.334183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.999 [2024-07-15 12:51:31.334217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.999 [2024-07-15 12:51:31.342182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.342215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.354205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 
[2024-07-15 12:51:31.354247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.366210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.366250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.370452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.000 [2024-07-15 12:51:31.378189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.378218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.390213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.390246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.402224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.402261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.414223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.414261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.426226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.426263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.432867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.000 [2024-07-15 12:51:31.438214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.438246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.446215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.446248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.454215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.454247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.462214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.462244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.470231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.470267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.478244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.478283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.486243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.486280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.494246] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.494282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.502247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.502284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.510252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.510289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.518258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.518293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.530341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.530405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 Running I/O for 5 seconds... 00:09:17.000 [2024-07-15 12:51:31.542308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.542344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.556135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.556174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.571227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.571267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.581117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.581154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.599314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.599394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.617502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.617561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.627936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.627986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.638942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.639008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.657435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.657510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.672122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 
[2024-07-15 12:51:31.672174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.681350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.681404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.692872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.692936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.703927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.703979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.714588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.714626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.727510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.727560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.743302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.743369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.752711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.752751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.768024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.768073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.783918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.783967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.793625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.793661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.805108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.805150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.822508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.822559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.000 [2024-07-15 12:51:31.832459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.000 [2024-07-15 12:51:31.832500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.843754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.843803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.854441] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.854491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.869362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.869430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.886581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.886619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.897059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.897096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.912076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.912124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.922470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.922509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.934394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.934444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.944926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.944963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.955865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.955920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.970784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.970846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.988243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.988308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:31.998351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:31.998407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.009625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.009673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.024865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.024915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.034955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.035004] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.049788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.049840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.059714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.059755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.076405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.076447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.087027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.087068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.101621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.101673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.119675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.119740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.134755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.134809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.144555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.144595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.159718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.159756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.174353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.174413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.189642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.189679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.199101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.199139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.212689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.212758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.223801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.223876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.237618] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.237668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.247591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.247631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.262459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.262497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.272257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.272294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.286220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.286277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.296609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.296658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.311111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.311148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.320975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.321013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.331947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.331988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.344521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.344558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.354640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.001 [2024-07-15 12:51:32.354677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.001 [2024-07-15 12:51:32.366244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.366282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.377160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.377200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.388292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.388352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.404329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.404381] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.420644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.420709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.430878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.430927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.442467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.442512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.453564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.453601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.471689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.471729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.486031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.486070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.495234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.495274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.510811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.510851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.527121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.527160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.545259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.545298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.560219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.560257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.570577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.570614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.582451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.582491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.593704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.593742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.604631] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.604673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.619330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.619380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.635842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.635882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.645784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.645821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.660715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.660754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.670991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.671030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.685524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.685561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.694730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.694766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.710298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.710336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.720567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.720605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.736155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.736193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.753309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.753348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.762760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.762798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.778553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.778599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.795597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.795654] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.805764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.805813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.819989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.820048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.836215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.836253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.845567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.845607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.860646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.860688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.870675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.870715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.882294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.882343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.893147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.893185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.905757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.905795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.922500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.922537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-07-15 12:51:32.938936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-07-15 12:51:32.938977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:32.948185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:32.948223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:32.959920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:32.959957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:32.971116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:32.971159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:32.985807] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:32.985864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:32.996040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:32.996090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:33.010990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:33.011045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:33.021611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:33.021648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:33.032506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:33.032544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:33.044944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:33.044999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-07-15 12:51:33.053987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-07-15 12:51:33.054042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.066983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.067037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.083081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.083134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.092211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.092271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.105302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.105339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.116258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.116303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.130155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.130210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.140336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.140386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.155343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.155427] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.172743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.172841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.189928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.190005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.205992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.206069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.222302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.222383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.240162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.240228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.255191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.255271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.265995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.266039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.280076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.280118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.290552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.290605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.261 [2024-07-15 12:51:33.305469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.261 [2024-07-15 12:51:33.305514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.321518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.321553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.339083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.339121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.353397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.353437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.362224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.362279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.374325] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.374376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.385155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.385195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.396327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.396387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.409303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.409345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.420244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.420292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.434954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.435005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.445000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.445052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.460202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.460260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.477722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.477790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.487949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.488004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.498855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.498910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.511793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.511847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.529608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.529662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.544542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.544603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.554105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.554149] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.565350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.565401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.519 [2024-07-15 12:51:33.576498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.519 [2024-07-15 12:51:33.576541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.587004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.587057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.599112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.599176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.614970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.615027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.624932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.624999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.639852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.639909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.650407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.650455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.665008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.665061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.682294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.682390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.692280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.692397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.706872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.706911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.717400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.717439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.731470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.731509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.749322] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.749398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.759727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.759778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.770833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.770871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.783642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.783682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.801389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.801427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.815830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.815869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.776 [2024-07-15 12:51:33.825421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.776 [2024-07-15 12:51:33.825456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.837243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.837278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.847729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.847766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.858387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.858427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.869682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.869731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.884817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.884881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.895065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.895104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.909621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.909661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.925824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.925878] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.944206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.944246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.959120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.959178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.969100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.969155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.981441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.981495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:33.997583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:33.997632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:34.007723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:34.007779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:34.023002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:34.023057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:34.033500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:34.033555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:34.048563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:34.048601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.033 [2024-07-15 12:51:34.064677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.033 [2024-07-15 12:51:34.064733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.034 [2024-07-15 12:51:34.074807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.034 [2024-07-15 12:51:34.074861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.034 [2024-07-15 12:51:34.086355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.034 [2024-07-15 12:51:34.086431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.101292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.101349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.116386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.116428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.126084] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.126143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.141561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.141606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.151782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.151821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.166578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.166614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.176700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.176755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.192128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.192169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.210264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.210302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.220749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.220805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.235355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.235439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.253019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.253085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.263447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.263485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.274767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.274823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.285834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.285873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.293 [2024-07-15 12:51:34.296466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.293 [2024-07-15 12:51:34.296503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.294 [2024-07-15 12:51:34.307349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.294 [2024-07-15 12:51:34.307398] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.294 [2024-07-15 12:51:34.324648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.294 [2024-07-15 12:51:34.324686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.294 [2024-07-15 12:51:34.341209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.294 [2024-07-15 12:51:34.341303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.294 [2024-07-15 12:51:34.350775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.294 [2024-07-15 12:51:34.350830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.366766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.366820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.383763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.383803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.400579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.400621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.411266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.411323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.425849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.425889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.443477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.443519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.459327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.459395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.477477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.477527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.492243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.492309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.507876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.507931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.525916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.525972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.536855] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.536909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.547567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.547604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.558236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.558284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.570646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.570680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.580097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.580132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.593666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.593709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.553 [2024-07-15 12:51:34.608488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.553 [2024-07-15 12:51:34.608526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.618287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.618325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.634528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.634564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.645279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.645319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.660197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.660236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.677407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.677469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.687099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.687140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.698204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.698242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.710914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.710953] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.727863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.727926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.743901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.743981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.761596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.761657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.771944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.772011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.782716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.782768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.799913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.799979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.817684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.817749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.828159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.828207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.838857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.838898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.850495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.850545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.859752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.859803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 [2024-07-15 12:51:34.871519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-07-15 12:51:34.871557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.883751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.883790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.893281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.893321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.906672] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.906712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.917237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.917277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.928324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.928374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.938893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.938930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.953331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.953384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.963450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.963489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.978299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.978339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:34.995330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:34.995401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.004904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.004942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.020597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.020674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.038446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.038484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.048828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.048866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.063646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.063684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.081363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.081444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.091172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.091225] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.106048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.106136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.116239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.116301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 [2024-07-15 12:51:35.130894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-07-15 12:51:35.130961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.146911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.146968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.156171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.156227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.169309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.169348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.180040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.180107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.190989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.191045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.201603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.201642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.214393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.214440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.231495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.231565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.248206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.248267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.264378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.264436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.282009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.282049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.296896] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.296933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.306565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.306601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.322939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.322978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.341798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.341857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.356571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.356613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.365898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.365936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.381524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.381560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.332 [2024-07-15 12:51:35.391191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.332 [2024-07-15 12:51:35.391230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.591 [2024-07-15 12:51:35.406289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.591 [2024-07-15 12:51:35.406327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.591 [2024-07-15 12:51:35.415480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.591 [2024-07-15 12:51:35.415517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.431518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.431556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.450554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.450592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.461080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.461115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.473581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.473631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.492598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.492636] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.506936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.506974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.516572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.516610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.527820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.527857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.541995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.542034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.551614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.551652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.565637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.565680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.574946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.574983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.591464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.591502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.608131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.608180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.626755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.626811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.637235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.637281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.592 [2024-07-15 12:51:35.647922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.592 [2024-07-15 12:51:35.647967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.850 [2024-07-15 12:51:35.664896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.850 [2024-07-15 12:51:35.664937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.682240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.682278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.698123] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.698161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.706992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.707029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.719852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.719890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.735217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.735255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.744412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.744449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.755775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.755814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.766065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.766104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.780802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.780839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.790499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.790537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.806193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.806231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.816030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.816069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.830968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.831011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.848953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.849010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.863340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.863408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.872881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.872919] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.884764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.884803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.898928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.898966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.851 [2024-07-15 12:51:35.908724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.851 [2024-07-15 12:51:35.908762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:35.923116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:35.923153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:35.938910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:35.938948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:35.947911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:35.947949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:35.960905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:35.960943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:35.976745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:35.976784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:35.986016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:35.986055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.002531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.002570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.012592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.012632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.027676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.027716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.045496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.045535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.059883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.059922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.077395] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.077435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.087927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.087969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.098613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.098664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.109460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.109501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.122044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.122085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.131130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.131167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.144605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.108 [2024-07-15 12:51:36.144645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.108 [2024-07-15 12:51:36.155064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.109 [2024-07-15 12:51:36.155105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.169633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.169672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.187263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.187303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.197391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.197429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.208233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.208283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.220707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.220746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.230219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.230257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.242812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.242850] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.259283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.259322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.277494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.277543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.292168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.292226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.302307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.302370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.317892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.317951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.334244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.334335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.350277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.350427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.367028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.367092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.384361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.384439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.399637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.399699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.366 [2024-07-15 12:51:36.416830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.366 [2024-07-15 12:51:36.416911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.432922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.432991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.449525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.449564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.466392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.466445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.476405] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.476441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.487850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.487901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.498814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.498868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.511344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.511415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.521167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.521222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.534121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.534178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.547863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.547932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 00:09:20.627 Latency(us) 00:09:20.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.627 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:20.627 Nvme1n1 : 5.01 11654.03 91.05 0.00 0.00 10969.05 2487.39 18111.77 00:09:20.627 =================================================================================================================== 00:09:20.627 Total : 11654.03 91.05 0.00 0.00 10969.05 2487.39 18111.77 00:09:20.627 [2024-07-15 12:51:36.557466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.557527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.569468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.569521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.577471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.577523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.589491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.589548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.597475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.597526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.605482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.605536] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.617508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.617569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.625497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.625563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.633505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.633559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.641501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.641552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.649518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.649576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.661535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.661617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.669539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.669576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.627 [2024-07-15 12:51:36.677538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.627 [2024-07-15 12:51:36.677591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.685511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.685563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.693521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.693555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.701519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.701569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.713577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.713630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.721541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.721579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.733544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.733582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.741543] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.741595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.749534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.749586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.761563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.761624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.769538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.769570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 [2024-07-15 12:51:36.781565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.887 [2024-07-15 12:51:36.781602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.887 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67797) - No such process 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67797 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.887 delay0 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.887 12:51:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:21.146 [2024-07-15 12:51:36.978538] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:27.711 Initializing NVMe Controllers 00:09:27.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:27.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:27.711 Initialization complete. Launching workers. 
00:09:27.711 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 844 00:09:27.711 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1131, failed to submit 33 00:09:27.711 success 1008, unsuccess 123, failed 0 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.711 rmmod nvme_tcp 00:09:27.711 rmmod nvme_fabrics 00:09:27.711 rmmod nvme_keyring 00:09:27.711 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67648 ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67648 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67648 ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67648 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67648 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:27.712 killing process with pid 67648 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67648' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67648 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67648 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:27.712 00:09:27.712 real 0m24.800s 00:09:27.712 user 0m40.856s 00:09:27.712 sys 0m6.687s 00:09:27.712 ************************************ 00:09:27.712 END 
TEST nvmf_zcopy 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.712 12:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.712 ************************************ 00:09:27.712 12:51:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:27.712 12:51:43 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:27.712 12:51:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.712 12:51:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.712 12:51:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.712 ************************************ 00:09:27.712 START TEST nvmf_nmic 00:09:27.712 ************************************ 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:27.712 * Looking for test storage... 00:09:27.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:27.712 Cannot find device "nvmf_tgt_br" 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.712 Cannot find device "nvmf_tgt_br2" 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:27.712 Cannot find device "nvmf_tgt_br" 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:27.712 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:27.971 Cannot find device "nvmf_tgt_br2" 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.971 12:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:27.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:27.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:27.971 00:09:27.971 --- 10.0.0.2 ping statistics --- 00:09:27.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.971 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:27.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:27.971 00:09:27.971 --- 10.0.0.3 ping statistics --- 00:09:27.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.971 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:27.971 00:09:27.971 --- 10.0.0.1 ping statistics --- 00:09:27.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.971 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.971 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68122 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68122 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68122 ']' 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.229 12:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.229 [2024-07-15 12:51:44.099343] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:09:28.229 [2024-07-15 12:51:44.099466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.229 [2024-07-15 12:51:44.242177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.487 [2024-07-15 12:51:44.378271] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.487 [2024-07-15 12:51:44.378786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.487 [2024-07-15 12:51:44.379083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.487 [2024-07-15 12:51:44.379386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.487 [2024-07-15 12:51:44.379606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.487 [2024-07-15 12:51:44.379997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.487 [2024-07-15 12:51:44.380118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.487 [2024-07-15 12:51:44.380441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.487 [2024-07-15 12:51:44.380497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.487 [2024-07-15 12:51:44.439173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.052 [2024-07-15 12:51:45.087601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.052 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 Malloc0 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
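For reference, the target-side setup that nmic.sh drives here through its rpc_cmd wrapper (transport, malloc bdev, subsystem, namespace, listener) condenses to the standalone sequence below. This is a minimal sketch only: it assumes the nvmf_tgt launched above is still up and listening on the default /var/tmp/spdk.sock, and it calls scripts/rpc.py directly instead of the test harness wrapper; every command name and flag is the one echoed in the surrounding xtrace output.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport, with the same flags the test passes (-o, -u 8192)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # Subsystem cnode1, allow-any-host, serial SPDKISFASTANDAWESOME
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420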
00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 [2024-07-15 12:51:45.156102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.311 test case1: single bdev can't be used in multiple subsystems 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 [2024-07-15 12:51:45.179945] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:29.311 [2024-07-15 12:51:45.180084] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:29.311 [2024-07-15 12:51:45.180179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.311 request: 00:09:29.311 { 00:09:29.311 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:29.311 "namespace": { 00:09:29.311 "bdev_name": "Malloc0", 00:09:29.311 "no_auto_visible": false 00:09:29.311 }, 00:09:29.311 "method": "nvmf_subsystem_add_ns", 00:09:29.311 "req_id": 1 00:09:29.311 } 00:09:29.311 Got JSON-RPC error response 00:09:29.311 response: 00:09:29.311 { 00:09:29.311 "code": -32602, 00:09:29.311 "message": "Invalid parameters" 00:09:29.311 } 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 00:09:29.311 Adding namespace failed - expected result. 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:29.311 test case2: host connect to nvmf target in multiple paths 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.311 [2024-07-15 12:51:45.192071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:29.311 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:29.568 12:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.568 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:29.568 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.568 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:29.568 12:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:31.463 12:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:31.463 12:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:31.463 12:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.464 12:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:31.464 12:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.464 12:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:31.464 12:51:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:31.464 [global] 00:09:31.464 thread=1 00:09:31.464 invalidate=1 00:09:31.464 rw=write 00:09:31.464 time_based=1 00:09:31.464 runtime=1 00:09:31.464 ioengine=libaio 00:09:31.464 direct=1 00:09:31.464 bs=4096 00:09:31.464 iodepth=1 00:09:31.464 norandommap=0 00:09:31.464 numjobs=1 00:09:31.464 00:09:31.464 verify_dump=1 00:09:31.464 verify_backlog=512 00:09:31.464 verify_state_save=0 00:09:31.464 do_verify=1 00:09:31.464 verify=crc32c-intel 00:09:31.464 [job0] 00:09:31.464 filename=/dev/nvme0n1 00:09:31.464 Could not set queue depth (nvme0n1) 00:09:31.721 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.721 fio-3.35 00:09:31.721 Starting 1 thread 00:09:33.097 00:09:33.097 job0: (groupid=0, jobs=1): err= 0: pid=68208: Mon Jul 15 12:51:48 
2024 00:09:33.097 read: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec) 00:09:33.097 slat (nsec): min=12726, max=42840, avg=14232.31, stdev=2393.91 00:09:33.097 clat (usec): min=142, max=245, avg=177.99, stdev=14.45 00:09:33.097 lat (usec): min=156, max=258, avg=192.23, stdev=14.63 00:09:33.097 clat percentiles (usec): 00:09:33.097 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:33.097 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:09:33.097 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:09:33.097 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 229], 99.95th=[ 241], 00:09:33.097 | 99.99th=[ 245] 00:09:33.097 write: IOPS=3125, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1000msec); 0 zone resets 00:09:33.097 slat (usec): min=15, max=182, avg=20.32, stdev= 5.36 00:09:33.097 clat (usec): min=84, max=315, avg=107.56, stdev=11.99 00:09:33.097 lat (usec): min=104, max=373, avg=127.88, stdev=14.18 00:09:33.097 clat percentiles (usec): 00:09:33.097 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 99], 00:09:33.097 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:09:33.097 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 128], 00:09:33.097 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 182], 99.95th=[ 212], 00:09:33.097 | 99.99th=[ 314] 00:09:33.097 bw ( KiB/s): min=12288, max=12288, per=98.30%, avg=12288.00, stdev= 0.00, samples=1 00:09:33.097 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:33.097 lat (usec) : 100=12.51%, 250=87.48%, 500=0.02% 00:09:33.097 cpu : usr=2.30%, sys=8.30%, ctx=6198, majf=0, minf=2 00:09:33.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.097 issued rwts: total=3072,3125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.097 00:09:33.097 Run status group 0 (all jobs): 00:09:33.097 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1000-1000msec 00:09:33.098 WRITE: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=12.2MiB (12.8MB), run=1000-1000msec 00:09:33.098 00:09:33.098 Disk stats (read/write): 00:09:33.098 nvme0n1: ios=2634/3072, merge=0/0, ticks=485/350, in_queue=835, util=91.48% 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- 
# nvmftestfini 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.098 rmmod nvme_tcp 00:09:33.098 rmmod nvme_fabrics 00:09:33.098 rmmod nvme_keyring 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68122 ']' 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68122 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68122 ']' 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68122 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68122 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.098 killing process with pid 68122 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68122' 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68122 00:09:33.098 12:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68122 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:33.357 00:09:33.357 real 0m5.657s 00:09:33.357 user 0m18.002s 00:09:33.357 sys 0m2.256s 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.357 12:51:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 ************************************ 00:09:33.357 END TEST nvmf_nmic 00:09:33.357 ************************************ 00:09:33.357 12:51:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.357 12:51:49 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 
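The host side of the nmic run that just finished (test case 2, multipath connect, plus the fio write/verify pass) is the usual connect, exercise, disconnect loop. Condensed into a standalone sketch with the arguments taken from this log: the hostnqn/hostid pair is the one nvme gen-hostnqn produced for this particular run, so a fresh setup would use its own, and the host_opts variable is only a readability helper, not part of the test script.

  host_opts="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88"
  # Connect to both listeners (4420 and 4421) of cnode1 over TCP
  nvme connect $host_opts -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect $host_opts -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # Same 4k, queue-depth-1 write/verify job the fio wrapper builds above
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
  # Tear the host side down again
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1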
00:09:33.357 12:51:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.357 12:51:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.357 12:51:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 ************************************ 00:09:33.357 START TEST nvmf_fio_target 00:09:33.357 ************************************ 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.357 * Looking for test storage... 00:09:33.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.357 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.358 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:33.616 Cannot find device "nvmf_tgt_br" 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.616 Cannot find device "nvmf_tgt_br2" 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:09:33.616 Cannot find device "nvmf_tgt_br" 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:33.616 Cannot find device "nvmf_tgt_br2" 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:33.616 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.617 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:33.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:33.875 00:09:33.875 --- 10.0.0.2 ping statistics --- 00:09:33.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.875 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:33.875 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.875 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:33.875 00:09:33.875 --- 10.0.0.3 ping statistics --- 00:09:33.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.875 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:33.875 00:09:33.875 --- 10.0.0.1 ping statistics --- 00:09:33.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.875 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
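The nvmf_veth_init sequence traced above reduces to the following standalone commands, collected here as a sketch for reproducing the test topology by hand (run as root; the namespace and interface names, the 10.0.0.0/24 addressing and the iptables rules are taken from the trace, while the condensed ordering is editorial -- the harness's nvmf/common.sh remains the authoritative version):

    # One initiator veth pair on the host, two target pairs inside the namespace, all tied to the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                                   # same sanity pings as the trace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1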
00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68392 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68392 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68392 ']' 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.875 12:51:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.875 [2024-07-15 12:51:49.796347] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:33.875 [2024-07-15 12:51:49.796929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.131 [2024-07-15 12:51:49.936236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.131 [2024-07-15 12:51:50.099279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.131 [2024-07-15 12:51:50.099354] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.131 [2024-07-15 12:51:50.099392] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.131 [2024-07-15 12:51:50.099403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.131 [2024-07-15 12:51:50.099412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
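Once nvmf_tgt is up inside the namespace and listening on /var/tmp/spdk.sock, the provisioning traced below amounts to roughly the rpc.py sequence sketched here: the TCP transport, seven 64 MiB malloc bdevs, a raid0 and a concat bdev built from five of them, and a single subsystem exposing four namespaces on 10.0.0.2:4420. The loop and the command ordering are editorial condensation of target/fio.sh, not a verbatim excerpt; the trace additionally passes --hostnqn/--hostid on the nvme connect line, omitted here for brevity.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                        # flags as in the trace (-u: in-capsule data size)
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done        # creates Malloc0 .. Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'      # striped raid bdev
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do                       # the four namespaces fio later sees as nvme0n1..n4
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # initiator side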
00:09:34.131 [2024-07-15 12:51:50.099549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.131 [2024-07-15 12:51:50.100265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.131 [2024-07-15 12:51:50.100405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.131 [2024-07-15 12:51:50.100413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.131 [2024-07-15 12:51:50.156624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.116 12:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.116 12:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:35.116 12:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.116 12:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.116 12:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.116 12:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.116 12:51:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:35.116 [2024-07-15 12:51:51.158995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.375 12:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.633 12:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:35.633 12:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.923 12:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:35.923 12:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.181 12:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:36.181 12:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.439 12:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:36.439 12:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:36.698 12:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.957 12:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:36.957 12:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.215 12:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:37.215 12:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.472 12:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:37.472 12:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:37.730 12:51:53 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.988 12:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:37.988 12:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.247 12:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:38.247 12:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:38.505 12:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.763 [2024-07-15 12:51:54.651951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.763 12:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:39.021 12:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:39.279 12:51:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.279 12:51:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:39.279 12:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:39.279 12:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.279 12:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:39.279 12:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:39.279 12:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:41.811 12:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:41.811 12:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:41.811 12:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.811 12:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:41.811 12:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.811 12:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:41.811 12:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:41.811 [global] 00:09:41.811 thread=1 00:09:41.811 invalidate=1 00:09:41.811 rw=write 00:09:41.811 time_based=1 00:09:41.811 runtime=1 00:09:41.811 ioengine=libaio 00:09:41.811 direct=1 00:09:41.811 bs=4096 00:09:41.811 iodepth=1 00:09:41.811 norandommap=0 00:09:41.811 numjobs=1 00:09:41.811 00:09:41.811 verify_dump=1 00:09:41.811 verify_backlog=512 00:09:41.811 verify_state_save=0 00:09:41.811 do_verify=1 00:09:41.811 
verify=crc32c-intel 00:09:41.811 [job0] 00:09:41.811 filename=/dev/nvme0n1 00:09:41.811 [job1] 00:09:41.811 filename=/dev/nvme0n2 00:09:41.811 [job2] 00:09:41.811 filename=/dev/nvme0n3 00:09:41.811 [job3] 00:09:41.811 filename=/dev/nvme0n4 00:09:41.811 Could not set queue depth (nvme0n1) 00:09:41.811 Could not set queue depth (nvme0n2) 00:09:41.811 Could not set queue depth (nvme0n3) 00:09:41.811 Could not set queue depth (nvme0n4) 00:09:41.811 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.811 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.811 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.811 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.811 fio-3.35 00:09:41.811 Starting 4 threads 00:09:42.743 00:09:42.743 job0: (groupid=0, jobs=1): err= 0: pid=68577: Mon Jul 15 12:51:58 2024 00:09:42.743 read: IOPS=1876, BW=7504KiB/s (7685kB/s)(7512KiB/1001msec) 00:09:42.743 slat (nsec): min=12660, max=58324, avg=17786.92, stdev=6086.91 00:09:42.743 clat (usec): min=166, max=883, avg=290.16, stdev=74.16 00:09:42.743 lat (usec): min=183, max=909, avg=307.95, stdev=77.40 00:09:42.743 clat percentiles (usec): 00:09:42.743 | 1.00th=[ 180], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 249], 00:09:42.743 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:09:42.743 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 375], 95.00th=[ 474], 00:09:42.743 | 99.00th=[ 523], 99.50th=[ 627], 99.90th=[ 783], 99.95th=[ 881], 00:09:42.743 | 99.99th=[ 881] 00:09:42.743 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:42.743 slat (usec): min=16, max=110, avg=23.46, stdev= 4.89 00:09:42.743 clat (usec): min=90, max=1843, avg=178.68, stdev=51.54 00:09:42.743 lat (usec): min=125, max=1866, avg=202.14, stdev=51.60 00:09:42.743 clat percentiles (usec): 00:09:42.743 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 137], 00:09:42.743 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:09:42.743 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 219], 00:09:42.743 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 445], 99.95th=[ 791], 00:09:42.743 | 99.99th=[ 1844] 00:09:42.743 bw ( KiB/s): min= 8175, max= 8175, per=23.96%, avg=8175.00, stdev= 0.00, samples=1 00:09:42.743 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:42.743 lat (usec) : 100=0.05%, 250=61.67%, 500=37.19%, 750=0.99%, 1000=0.08% 00:09:42.743 lat (msec) : 2=0.03% 00:09:42.743 cpu : usr=2.10%, sys=5.90%, ctx=3929, majf=0, minf=11 00:09:42.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.743 issued rwts: total=1878,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.743 job1: (groupid=0, jobs=1): err= 0: pid=68578: Mon Jul 15 12:51:58 2024 00:09:42.743 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:42.743 slat (nsec): min=8890, max=56700, avg=12642.57, stdev=3529.50 00:09:42.743 clat (usec): min=209, max=411, avg=244.85, stdev=15.86 00:09:42.743 lat (usec): min=222, max=425, avg=257.49, stdev=16.44 00:09:42.743 clat percentiles 
(usec): 00:09:42.743 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:09:42.743 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:09:42.743 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:09:42.743 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 314], 99.95th=[ 334], 00:09:42.743 | 99.99th=[ 412] 00:09:42.743 write: IOPS=2218, BW=8875KiB/s (9088kB/s)(8884KiB/1001msec); 0 zone resets 00:09:42.743 slat (usec): min=13, max=246, avg=20.77, stdev= 5.84 00:09:42.743 clat (usec): min=125, max=782, avg=189.01, stdev=22.33 00:09:42.743 lat (usec): min=148, max=805, avg=209.77, stdev=23.77 00:09:42.743 clat percentiles (usec): 00:09:42.743 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:09:42.743 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:09:42.743 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:09:42.743 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 433], 99.95th=[ 449], 00:09:42.743 | 99.99th=[ 783] 00:09:42.743 bw ( KiB/s): min= 8431, max= 8431, per=24.71%, avg=8431.00, stdev= 0.00, samples=1 00:09:42.743 iops : min= 2107, max= 2107, avg=2107.00, stdev= 0.00, samples=1 00:09:42.743 lat (usec) : 250=84.98%, 500=14.99%, 1000=0.02% 00:09:42.743 cpu : usr=2.30%, sys=5.40%, ctx=4275, majf=0, minf=5 00:09:42.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.743 issued rwts: total=2048,2221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.743 job2: (groupid=0, jobs=1): err= 0: pid=68579: Mon Jul 15 12:51:58 2024 00:09:42.743 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:42.743 slat (nsec): min=9031, max=56215, avg=13394.08, stdev=3366.31 00:09:42.743 clat (usec): min=175, max=318, avg=243.78, stdev=15.41 00:09:42.743 lat (usec): min=209, max=328, avg=257.18, stdev=15.93 00:09:42.743 clat percentiles (usec): 00:09:42.743 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:09:42.743 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:09:42.743 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:09:42.743 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 310], 99.95th=[ 310], 00:09:42.743 | 99.99th=[ 318] 00:09:42.743 write: IOPS=2220, BW=8883KiB/s (9096kB/s)(8892KiB/1001msec); 0 zone resets 00:09:42.743 slat (nsec): min=11226, max=58368, avg=18974.35, stdev=6412.74 00:09:42.743 clat (usec): min=133, max=704, avg=190.96, stdev=20.82 00:09:42.743 lat (usec): min=158, max=722, avg=209.94, stdev=22.70 00:09:42.743 clat percentiles (usec): 00:09:42.743 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:09:42.743 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 192], 00:09:42.743 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:09:42.743 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 510], 99.95th=[ 523], 00:09:42.743 | 99.99th=[ 701] 00:09:42.743 bw ( KiB/s): min= 8464, max= 8464, per=24.80%, avg=8464.00, stdev= 0.00, samples=1 00:09:42.743 iops : min= 2116, max= 2116, avg=2116.00, stdev= 0.00, samples=1 00:09:42.743 lat (usec) : 250=85.30%, 500=14.63%, 750=0.07% 00:09:42.743 cpu : usr=1.50%, sys=6.00%, ctx=4271, majf=0, minf=8 00:09:42.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:42.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.743 issued rwts: total=2048,2223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.743 job3: (groupid=0, jobs=1): err= 0: pid=68580: Mon Jul 15 12:51:58 2024 00:09:42.743 read: IOPS=1787, BW=7149KiB/s (7320kB/s)(7156KiB/1001msec) 00:09:42.743 slat (usec): min=13, max=454, avg=18.35, stdev=16.78 00:09:42.743 clat (usec): min=186, max=2614, avg=291.56, stdev=81.18 00:09:42.743 lat (usec): min=215, max=2654, avg=309.91, stdev=83.94 00:09:42.743 clat percentiles (usec): 00:09:42.743 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:09:42.743 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:09:42.744 | 70.00th=[ 289], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 392], 00:09:42.744 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 1450], 99.95th=[ 2606], 00:09:42.744 | 99.99th=[ 2606] 00:09:42.744 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:42.744 slat (usec): min=18, max=108, avg=25.42, stdev=10.12 00:09:42.744 clat (usec): min=98, max=915, avg=188.73, stdev=51.14 00:09:42.744 lat (usec): min=134, max=939, avg=214.16, stdev=57.18 00:09:42.744 clat percentiles (usec): 00:09:42.744 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 147], 00:09:42.744 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:09:42.744 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 227], 95.00th=[ 302], 00:09:42.744 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 578], 99.95th=[ 701], 00:09:42.744 | 99.99th=[ 914] 00:09:42.744 bw ( KiB/s): min= 8175, max= 8175, per=23.96%, avg=8175.00, stdev= 0.00, samples=1 00:09:42.744 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:42.744 lat (usec) : 100=0.03%, 250=56.40%, 500=43.32%, 750=0.16%, 1000=0.05% 00:09:42.744 lat (msec) : 2=0.03%, 4=0.03% 00:09:42.744 cpu : usr=1.60%, sys=6.60%, ctx=3847, majf=0, minf=11 00:09:42.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.744 issued rwts: total=1789,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.744 00:09:42.744 Run status group 0 (all jobs): 00:09:42.744 READ: bw=30.3MiB/s (31.8MB/s), 7149KiB/s-8184KiB/s (7320kB/s-8380kB/s), io=30.3MiB (31.8MB), run=1001-1001msec 00:09:42.744 WRITE: bw=33.3MiB/s (34.9MB/s), 8184KiB/s-8883KiB/s (8380kB/s-9096kB/s), io=33.4MiB (35.0MB), run=1001-1001msec 00:09:42.744 00:09:42.744 Disk stats (read/write): 00:09:42.744 nvme0n1: ios=1586/1907, merge=0/0, ticks=462/353, in_queue=815, util=88.28% 00:09:42.744 nvme0n2: ios=1696/2048, merge=0/0, ticks=406/381, in_queue=787, util=88.97% 00:09:42.744 nvme0n3: ios=1649/2048, merge=0/0, ticks=379/373, in_queue=752, util=89.27% 00:09:42.744 nvme0n4: ios=1536/1729, merge=0/0, ticks=456/353, in_queue=809, util=89.82% 00:09:42.744 12:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:42.744 [global] 00:09:42.744 thread=1 00:09:42.744 invalidate=1 00:09:42.744 rw=randwrite 00:09:42.744 time_based=1 00:09:42.744 runtime=1 00:09:42.744 
ioengine=libaio 00:09:42.744 direct=1 00:09:42.744 bs=4096 00:09:42.744 iodepth=1 00:09:42.744 norandommap=0 00:09:42.744 numjobs=1 00:09:42.744 00:09:42.744 verify_dump=1 00:09:42.744 verify_backlog=512 00:09:42.744 verify_state_save=0 00:09:42.744 do_verify=1 00:09:42.744 verify=crc32c-intel 00:09:42.744 [job0] 00:09:42.744 filename=/dev/nvme0n1 00:09:42.744 [job1] 00:09:42.744 filename=/dev/nvme0n2 00:09:42.744 [job2] 00:09:42.744 filename=/dev/nvme0n3 00:09:42.744 [job3] 00:09:42.744 filename=/dev/nvme0n4 00:09:42.744 Could not set queue depth (nvme0n1) 00:09:42.744 Could not set queue depth (nvme0n2) 00:09:42.744 Could not set queue depth (nvme0n3) 00:09:42.744 Could not set queue depth (nvme0n4) 00:09:43.001 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.001 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.001 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.001 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.001 fio-3.35 00:09:43.001 Starting 4 threads 00:09:43.993 00:09:43.993 job0: (groupid=0, jobs=1): err= 0: pid=68633: Mon Jul 15 12:52:00 2024 00:09:43.993 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:43.993 slat (usec): min=12, max=153, avg=16.52, stdev= 7.20 00:09:43.993 clat (usec): min=143, max=1219, avg=254.21, stdev=46.08 00:09:43.993 lat (usec): min=157, max=1239, avg=270.72, stdev=47.42 00:09:43.993 clat percentiles (usec): 00:09:43.993 | 1.00th=[ 172], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 235], 00:09:43.993 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:09:43.993 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 318], 00:09:43.993 | 99.00th=[ 445], 99.50th=[ 474], 99.90th=[ 807], 99.95th=[ 922], 00:09:43.993 | 99.99th=[ 1221] 00:09:43.993 write: IOPS=2164, BW=8659KiB/s (8867kB/s)(8668KiB/1001msec); 0 zone resets 00:09:43.993 slat (usec): min=17, max=118, avg=23.92, stdev= 8.04 00:09:43.993 clat (usec): min=90, max=417, avg=178.10, stdev=24.18 00:09:43.993 lat (usec): min=110, max=476, avg=202.02, stdev=25.04 00:09:43.993 clat percentiles (usec): 00:09:43.993 | 1.00th=[ 112], 5.00th=[ 135], 10.00th=[ 155], 20.00th=[ 165], 00:09:43.993 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:09:43.993 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:09:43.993 | 99.00th=[ 231], 99.50th=[ 265], 99.90th=[ 388], 99.95th=[ 396], 00:09:43.993 | 99.99th=[ 416] 00:09:43.993 bw ( KiB/s): min= 8256, max= 8256, per=24.69%, avg=8256.00, stdev= 0.00, samples=1 00:09:43.993 iops : min= 2064, max= 2064, avg=2064.00, stdev= 0.00, samples=1 00:09:43.993 lat (usec) : 100=0.14%, 250=77.11%, 500=22.63%, 750=0.05%, 1000=0.05% 00:09:43.993 lat (msec) : 2=0.02% 00:09:43.993 cpu : usr=1.70%, sys=6.70%, ctx=4219, majf=0, minf=13 00:09:43.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.993 issued rwts: total=2048,2167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.993 job1: (groupid=0, jobs=1): err= 0: pid=68634: Mon Jul 15 12:52:00 2024 00:09:43.993 read: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:43.993 slat (nsec): min=12123, max=37191, avg=13976.22, stdev=2055.79 00:09:43.993 clat (usec): min=143, max=5272, avg=260.55, stdev=139.42 00:09:43.993 lat (usec): min=157, max=5294, avg=274.52, stdev=139.84 00:09:43.993 clat percentiles (usec): 00:09:43.993 | 1.00th=[ 204], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:09:43.993 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:09:43.993 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 322], 00:09:43.993 | 99.00th=[ 375], 99.50th=[ 461], 99.90th=[ 2008], 99.95th=[ 3359], 00:09:43.993 | 99.99th=[ 5276] 00:09:43.993 write: IOPS=2101, BW=8408KiB/s (8609kB/s)(8416KiB/1001msec); 0 zone resets 00:09:43.993 slat (nsec): min=17526, max=95551, avg=20498.26, stdev=4006.45 00:09:43.993 clat (usec): min=109, max=912, avg=184.26, stdev=32.44 00:09:43.993 lat (usec): min=128, max=931, avg=204.76, stdev=33.07 00:09:43.993 clat percentiles (usec): 00:09:43.993 | 1.00th=[ 121], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 172], 00:09:43.993 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:09:43.993 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 212], 00:09:43.993 | 99.00th=[ 269], 99.50th=[ 318], 99.90th=[ 529], 99.95th=[ 709], 00:09:43.993 | 99.99th=[ 914] 00:09:43.994 bw ( KiB/s): min= 8592, max= 8592, per=25.70%, avg=8592.00, stdev= 0.00, samples=1 00:09:43.994 iops : min= 2148, max= 2148, avg=2148.00, stdev= 0.00, samples=1 00:09:43.994 lat (usec) : 250=74.40%, 500=25.41%, 750=0.10%, 1000=0.02% 00:09:43.994 lat (msec) : 4=0.05%, 10=0.02% 00:09:43.994 cpu : usr=1.70%, sys=5.60%, ctx=4161, majf=0, minf=9 00:09:43.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.994 issued rwts: total=2048,2104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.994 job2: (groupid=0, jobs=1): err= 0: pid=68635: Mon Jul 15 12:52:00 2024 00:09:43.994 read: IOPS=1812, BW=7249KiB/s (7423kB/s)(7256KiB/1001msec) 00:09:43.994 slat (nsec): min=12592, max=86820, avg=17330.90, stdev=5018.91 00:09:43.994 clat (usec): min=169, max=1666, avg=276.58, stdev=50.95 00:09:43.994 lat (usec): min=182, max=1680, avg=293.91, stdev=51.54 00:09:43.994 clat percentiles (usec): 00:09:43.994 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:09:43.994 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:09:43.994 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 322], 00:09:43.994 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 734], 99.95th=[ 1663], 00:09:43.994 | 99.99th=[ 1663] 00:09:43.994 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:43.994 slat (nsec): min=18109, max=89914, avg=25757.52, stdev=7476.16 00:09:43.994 clat (usec): min=103, max=785, avg=198.64, stdev=26.40 00:09:43.994 lat (usec): min=123, max=808, avg=224.40, stdev=26.19 00:09:43.994 clat percentiles (usec): 00:09:43.994 | 1.00th=[ 120], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 184], 00:09:43.994 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:09:43.994 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 237], 00:09:43.994 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 318], 99.95th=[ 334], 00:09:43.994 | 99.99th=[ 783] 00:09:43.994 bw ( KiB/s): min= 
8192, max= 8192, per=24.50%, avg=8192.00, stdev= 0.00, samples=2 00:09:43.994 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:43.994 lat (usec) : 250=57.53%, 500=42.21%, 750=0.21%, 1000=0.03% 00:09:43.994 lat (msec) : 2=0.03% 00:09:43.994 cpu : usr=1.30%, sys=6.90%, ctx=3862, majf=0, minf=16 00:09:43.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.994 issued rwts: total=1814,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.994 job3: (groupid=0, jobs=1): err= 0: pid=68636: Mon Jul 15 12:52:00 2024 00:09:43.994 read: IOPS=1805, BW=7221KiB/s (7394kB/s)(7228KiB/1001msec) 00:09:43.994 slat (nsec): min=11820, max=39169, avg=14234.95, stdev=3277.98 00:09:43.994 clat (usec): min=164, max=2034, avg=277.72, stdev=50.88 00:09:43.994 lat (usec): min=177, max=2057, avg=291.95, stdev=51.60 00:09:43.994 clat percentiles (usec): 00:09:43.994 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:09:43.994 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:09:43.994 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 318], 00:09:43.994 | 99.00th=[ 412], 99.50th=[ 429], 99.90th=[ 537], 99.95th=[ 2040], 00:09:43.994 | 99.99th=[ 2040] 00:09:43.994 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:43.994 slat (nsec): min=17447, max=90219, avg=21534.07, stdev=4876.09 00:09:43.994 clat (usec): min=115, max=777, avg=205.83, stdev=31.35 00:09:43.994 lat (usec): min=134, max=802, avg=227.37, stdev=33.06 00:09:43.994 clat percentiles (usec): 00:09:43.994 | 1.00th=[ 131], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 192], 00:09:43.994 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:09:43.994 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 239], 00:09:43.994 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 400], 99.95th=[ 494], 00:09:43.994 | 99.99th=[ 775] 00:09:43.994 bw ( KiB/s): min= 8192, max= 8192, per=24.50%, avg=8192.00, stdev= 0.00, samples=1 00:09:43.994 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:43.994 lat (usec) : 250=54.29%, 500=45.58%, 750=0.08%, 1000=0.03% 00:09:43.994 lat (msec) : 4=0.03% 00:09:43.994 cpu : usr=1.80%, sys=5.40%, ctx=3857, majf=0, minf=7 00:09:43.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.994 issued rwts: total=1807,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.994 00:09:43.994 Run status group 0 (all jobs): 00:09:43.994 READ: bw=30.1MiB/s (31.6MB/s), 7221KiB/s-8184KiB/s (7394kB/s-8380kB/s), io=30.1MiB (31.6MB), run=1001-1001msec 00:09:43.994 WRITE: bw=32.7MiB/s (34.2MB/s), 8184KiB/s-8659KiB/s (8380kB/s-8867kB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:09:43.994 00:09:43.994 Disk stats (read/write): 00:09:43.994 nvme0n1: ios=1629/2048, merge=0/0, ticks=435/385, in_queue=820, util=87.47% 00:09:43.994 nvme0n2: ios=1583/2048, merge=0/0, ticks=409/383, in_queue=792, util=86.35% 00:09:43.994 nvme0n3: ios=1536/1744, merge=0/0, ticks=436/376, in_queue=812, util=88.75% 00:09:43.994 nvme0n4: 
ios=1536/1732, merge=0/0, ticks=434/379, in_queue=813, util=89.59% 00:09:43.994 12:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:44.252 [global] 00:09:44.252 thread=1 00:09:44.252 invalidate=1 00:09:44.252 rw=write 00:09:44.252 time_based=1 00:09:44.252 runtime=1 00:09:44.252 ioengine=libaio 00:09:44.252 direct=1 00:09:44.252 bs=4096 00:09:44.252 iodepth=128 00:09:44.252 norandommap=0 00:09:44.252 numjobs=1 00:09:44.252 00:09:44.252 verify_dump=1 00:09:44.252 verify_backlog=512 00:09:44.252 verify_state_save=0 00:09:44.252 do_verify=1 00:09:44.252 verify=crc32c-intel 00:09:44.252 [job0] 00:09:44.252 filename=/dev/nvme0n1 00:09:44.252 [job1] 00:09:44.252 filename=/dev/nvme0n2 00:09:44.252 [job2] 00:09:44.252 filename=/dev/nvme0n3 00:09:44.252 [job3] 00:09:44.252 filename=/dev/nvme0n4 00:09:44.252 Could not set queue depth (nvme0n1) 00:09:44.252 Could not set queue depth (nvme0n2) 00:09:44.252 Could not set queue depth (nvme0n3) 00:09:44.252 Could not set queue depth (nvme0n4) 00:09:44.252 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.252 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.252 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.252 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.252 fio-3.35 00:09:44.252 Starting 4 threads 00:09:45.692 00:09:45.692 job0: (groupid=0, jobs=1): err= 0: pid=68700: Mon Jul 15 12:52:01 2024 00:09:45.692 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:45.692 slat (usec): min=4, max=5321, avg=83.18, stdev=370.62 00:09:45.692 clat (usec): min=8264, max=14690, avg=11241.22, stdev=713.43 00:09:45.692 lat (usec): min=9094, max=14703, avg=11324.41, stdev=619.37 00:09:45.692 clat percentiles (usec): 00:09:45.692 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[10814], 20.00th=[10945], 00:09:45.692 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:09:45.692 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[12125], 00:09:45.692 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14615], 99.95th=[14746], 00:09:45.692 | 99.99th=[14746] 00:09:45.692 write: IOPS=5991, BW=23.4MiB/s (24.5MB/s)(23.4MiB/1002msec); 0 zone resets 00:09:45.692 slat (usec): min=10, max=2489, avg=81.00, stdev=334.08 00:09:45.692 clat (usec): min=1147, max=12500, avg=10543.56, stdev=875.80 00:09:45.692 lat (usec): min=1169, max=12784, avg=10624.56, stdev=820.54 00:09:45.692 clat percentiles (usec): 00:09:45.692 | 1.00th=[ 6521], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10421], 00:09:45.692 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:09:45.692 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11076], 95.00th=[11207], 00:09:45.692 | 99.00th=[11338], 99.50th=[11600], 99.90th=[12387], 99.95th=[12387], 00:09:45.692 | 99.99th=[12518] 00:09:45.692 bw ( KiB/s): min=22432, max=24625, per=34.55%, avg=23528.50, stdev=1550.69, samples=2 00:09:45.692 iops : min= 5608, max= 6156, avg=5882.00, stdev=387.49, samples=2 00:09:45.692 lat (msec) : 2=0.15%, 4=0.11%, 10=4.00%, 20=95.74% 00:09:45.692 cpu : usr=5.79%, sys=14.99%, ctx=430, majf=0, minf=8 00:09:45.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:45.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.692 issued rwts: total=5632,6003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.692 job1: (groupid=0, jobs=1): err= 0: pid=68701: Mon Jul 15 12:52:01 2024 00:09:45.692 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:45.692 slat (usec): min=9, max=5623, avg=175.94, stdev=900.59 00:09:45.692 clat (usec): min=17237, max=25477, avg=23112.30, stdev=1011.26 00:09:45.692 lat (usec): min=21915, max=25490, avg=23288.23, stdev=445.21 00:09:45.692 clat percentiles (usec): 00:09:45.692 | 1.00th=[17957], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:09:45.692 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:09:45.692 | 70.00th=[23462], 80.00th=[23462], 90.00th=[23725], 95.00th=[23987], 00:09:45.692 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:09:45.692 | 99.99th=[25560] 00:09:45.692 write: IOPS=2975, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:09:45.692 slat (usec): min=14, max=7550, avg=178.12, stdev=863.82 00:09:45.692 clat (usec): min=527, max=27909, avg=22375.02, stdev=2846.42 00:09:45.692 lat (usec): min=560, max=27932, avg=22553.13, stdev=2722.94 00:09:45.692 clat percentiles (usec): 00:09:45.692 | 1.00th=[ 6194], 5.00th=[17695], 10.00th=[21627], 20.00th=[21890], 00:09:45.692 | 30.00th=[22152], 40.00th=[22152], 50.00th=[22414], 60.00th=[22676], 00:09:45.692 | 70.00th=[22676], 80.00th=[23200], 90.00th=[24511], 95.00th=[27395], 00:09:45.692 | 99.00th=[27919], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:09:45.692 | 99.99th=[27919] 00:09:45.692 bw ( KiB/s): min=12288, max=12288, per=18.04%, avg=12288.00, stdev= 0.00, samples=1 00:09:45.692 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:45.692 lat (usec) : 750=0.04% 00:09:45.692 lat (msec) : 10=0.58%, 20=4.04%, 50=95.34% 00:09:45.692 cpu : usr=1.60%, sys=8.30%, ctx=174, majf=0, minf=9 00:09:45.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:45.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.692 issued rwts: total=2560,2978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.692 job2: (groupid=0, jobs=1): err= 0: pid=68702: Mon Jul 15 12:52:01 2024 00:09:45.692 read: IOPS=5078, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1002msec) 00:09:45.692 slat (usec): min=8, max=2930, avg=95.32, stdev=446.52 00:09:45.692 clat (usec): min=741, max=13853, avg=12637.40, stdev=1051.96 00:09:45.692 lat (usec): min=3551, max=13877, avg=12732.72, stdev=953.81 00:09:45.692 clat percentiles (usec): 00:09:45.692 | 1.00th=[ 6783], 5.00th=[11469], 10.00th=[12125], 20.00th=[12518], 00:09:45.692 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[12911], 00:09:45.692 | 70.00th=[13042], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:09:45.692 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:09:45.692 | 99.99th=[13829] 00:09:45.692 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:45.692 slat (usec): min=11, max=2779, avg=92.67, stdev=390.63 00:09:45.692 clat (usec): min=9196, max=13378, avg=12155.40, stdev=531.89 00:09:45.692 lat (usec): min=10018, max=13439, avg=12248.07, 
stdev=361.69 00:09:45.692 clat percentiles (usec): 00:09:45.692 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[11731], 20.00th=[11863], 00:09:45.692 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:09:45.692 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12780], 00:09:45.692 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13304], 99.95th=[13304], 00:09:45.692 | 99.99th=[13435] 00:09:45.692 bw ( KiB/s): min=20480, max=20521, per=30.10%, avg=20500.50, stdev=28.99, samples=2 00:09:45.692 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:45.692 lat (usec) : 750=0.01% 00:09:45.692 lat (msec) : 4=0.26%, 10=1.62%, 20=98.11% 00:09:45.692 cpu : usr=4.40%, sys=14.69%, ctx=324, majf=0, minf=15 00:09:45.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:45.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.692 issued rwts: total=5089,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.692 job3: (groupid=0, jobs=1): err= 0: pid=68703: Mon Jul 15 12:52:01 2024 00:09:45.692 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:09:45.692 slat (usec): min=5, max=5655, avg=176.80, stdev=901.53 00:09:45.692 clat (usec): min=17310, max=24302, avg=23042.74, stdev=963.61 00:09:45.692 lat (usec): min=22340, max=24316, avg=23219.54, stdev=349.13 00:09:45.692 clat percentiles (usec): 00:09:45.692 | 1.00th=[17957], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:09:45.692 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23200], 00:09:45.692 | 70.00th=[23462], 80.00th=[23462], 90.00th=[23725], 95.00th=[23725], 00:09:45.692 | 99.00th=[23987], 99.50th=[24249], 99.90th=[24249], 99.95th=[24249], 00:09:45.692 | 99.99th=[24249] 00:09:45.692 write: IOPS=2968, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1003msec); 0 zone resets 00:09:45.692 slat (usec): min=15, max=6361, avg=177.62, stdev=854.27 00:09:45.692 clat (usec): min=589, max=27888, avg=22489.68, stdev=2926.40 00:09:45.692 lat (usec): min=4612, max=27912, avg=22667.30, stdev=2807.15 00:09:45.692 clat percentiles (usec): 00:09:45.692 | 1.00th=[ 5407], 5.00th=[17957], 10.00th=[21890], 20.00th=[22152], 00:09:45.692 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22414], 60.00th=[22676], 00:09:45.692 | 70.00th=[22938], 80.00th=[23200], 90.00th=[24511], 95.00th=[27395], 00:09:45.692 | 99.00th=[27919], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:09:45.692 | 99.99th=[27919] 00:09:45.692 bw ( KiB/s): min=10504, max=12288, per=16.73%, avg=11396.00, stdev=1261.48, samples=2 00:09:45.692 iops : min= 2626, max= 3072, avg=2849.00, stdev=315.37, samples=2 00:09:45.692 lat (usec) : 750=0.02% 00:09:45.692 lat (msec) : 10=0.61%, 20=4.03%, 50=95.34% 00:09:45.692 cpu : usr=2.00%, sys=9.08%, ctx=175, majf=0, minf=15 00:09:45.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:45.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.692 issued rwts: total=2560,2977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.692 00:09:45.692 Run status group 0 (all jobs): 00:09:45.692 READ: bw=61.7MiB/s (64.7MB/s), 9.97MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=61.9MiB (64.9MB), run=1001-1003msec 
00:09:45.692 WRITE: bw=66.5MiB/s (69.7MB/s), 11.6MiB/s-23.4MiB/s (12.2MB/s-24.5MB/s), io=66.7MiB (70.0MB), run=1001-1003msec 00:09:45.692 00:09:45.692 Disk stats (read/write): 00:09:45.692 nvme0n1: ios=5009/5120, merge=0/0, ticks=12150/11334, in_queue=23484, util=89.07% 00:09:45.692 nvme0n2: ios=2236/2560, merge=0/0, ticks=10947/11233, in_queue=22180, util=88.34% 00:09:45.693 nvme0n3: ios=4224/4608, merge=0/0, ticks=12173/11779, in_queue=23952, util=89.29% 00:09:45.693 nvme0n4: ios=2208/2560, merge=0/0, ticks=12124/13545, in_queue=25669, util=89.86% 00:09:45.693 12:52:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:45.693 [global] 00:09:45.693 thread=1 00:09:45.693 invalidate=1 00:09:45.693 rw=randwrite 00:09:45.693 time_based=1 00:09:45.693 runtime=1 00:09:45.693 ioengine=libaio 00:09:45.693 direct=1 00:09:45.693 bs=4096 00:09:45.693 iodepth=128 00:09:45.693 norandommap=0 00:09:45.693 numjobs=1 00:09:45.693 00:09:45.693 verify_dump=1 00:09:45.693 verify_backlog=512 00:09:45.693 verify_state_save=0 00:09:45.693 do_verify=1 00:09:45.693 verify=crc32c-intel 00:09:45.693 [job0] 00:09:45.693 filename=/dev/nvme0n1 00:09:45.693 [job1] 00:09:45.693 filename=/dev/nvme0n2 00:09:45.693 [job2] 00:09:45.693 filename=/dev/nvme0n3 00:09:45.693 [job3] 00:09:45.693 filename=/dev/nvme0n4 00:09:45.693 Could not set queue depth (nvme0n1) 00:09:45.693 Could not set queue depth (nvme0n2) 00:09:45.693 Could not set queue depth (nvme0n3) 00:09:45.693 Could not set queue depth (nvme0n4) 00:09:45.693 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.693 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.693 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.693 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.693 fio-3.35 00:09:45.693 Starting 4 threads 00:09:47.074 00:09:47.074 job0: (groupid=0, jobs=1): err= 0: pid=68757: Mon Jul 15 12:52:02 2024 00:09:47.074 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:47.074 slat (usec): min=6, max=17739, avg=185.63, stdev=1417.82 00:09:47.074 clat (usec): min=18397, max=39659, avg=24583.21, stdev=2532.83 00:09:47.074 lat (usec): min=18424, max=45184, avg=24768.85, stdev=2815.76 00:09:47.074 clat percentiles (usec): 00:09:47.074 | 1.00th=[19530], 5.00th=[21890], 10.00th=[22676], 20.00th=[23462], 00:09:47.074 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:09:47.074 | 70.00th=[24511], 80.00th=[25035], 90.00th=[29230], 95.00th=[30016], 00:09:47.074 | 99.00th=[31327], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:09:47.074 | 99.99th=[39584] 00:09:47.074 write: IOPS=3055, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:47.074 slat (usec): min=11, max=15301, avg=163.99, stdev=1096.88 00:09:47.074 clat (usec): min=797, max=29869, avg=20929.97, stdev=3353.03 00:09:47.074 lat (usec): min=9095, max=29899, avg=21093.96, stdev=3202.93 00:09:47.074 clat percentiles (usec): 00:09:47.074 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[17433], 20.00th=[20317], 00:09:47.074 | 30.00th=[20841], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:09:47.074 | 70.00th=[22414], 80.00th=[22414], 90.00th=[22676], 95.00th=[23987], 00:09:47.074 | 99.00th=[28443], 
99.50th=[28705], 99.90th=[29230], 99.95th=[29754], 00:09:47.074 | 99.99th=[29754] 00:09:47.074 bw ( KiB/s): min=11256, max=12263, per=16.86%, avg=11759.50, stdev=712.06, samples=2 00:09:47.074 iops : min= 2814, max= 3065, avg=2939.50, stdev=177.48, samples=2 00:09:47.074 lat (usec) : 1000=0.02% 00:09:47.074 lat (msec) : 10=1.01%, 20=9.52%, 50=89.45% 00:09:47.074 cpu : usr=2.19%, sys=8.96%, ctx=121, majf=0, minf=3 00:09:47.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:47.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.074 issued rwts: total=2560,3071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.074 job1: (groupid=0, jobs=1): err= 0: pid=68758: Mon Jul 15 12:52:02 2024 00:09:47.074 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:09:47.074 slat (usec): min=5, max=9507, avg=83.64, stdev=498.54 00:09:47.074 clat (usec): min=5080, max=20226, avg=11515.62, stdev=1641.48 00:09:47.074 lat (usec): min=5181, max=23603, avg=11599.27, stdev=1658.33 00:09:47.074 clat percentiles (usec): 00:09:47.074 | 1.00th=[ 7242], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[10945], 00:09:47.074 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:09:47.074 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[14222], 00:09:47.074 | 99.00th=[18744], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:09:47.074 | 99.99th=[20317] 00:09:47.074 write: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(23.7MiB/1001msec); 0 zone resets 00:09:47.074 slat (usec): min=4, max=7495, avg=79.88, stdev=447.00 00:09:47.074 clat (usec): min=600, max=20175, avg=10208.81, stdev=1297.73 00:09:47.074 lat (usec): min=3145, max=20183, avg=10288.69, stdev=1239.06 00:09:47.074 clat percentiles (usec): 00:09:47.074 | 1.00th=[ 4752], 5.00th=[ 8160], 10.00th=[ 9241], 20.00th=[ 9765], 00:09:47.074 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:09:47.074 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:09:47.074 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14091], 99.95th=[14222], 00:09:47.074 | 99.99th=[20055] 00:09:47.074 bw ( KiB/s): min=24576, max=24576, per=35.23%, avg=24576.00, stdev= 0.00, samples=1 00:09:47.074 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:09:47.074 lat (usec) : 750=0.01% 00:09:47.074 lat (msec) : 4=0.29%, 10=18.80%, 20=80.83%, 50=0.08% 00:09:47.074 cpu : usr=4.20%, sys=15.50%, ctx=325, majf=0, minf=3 00:09:47.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:47.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.074 issued rwts: total=5632,6078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.074 job2: (groupid=0, jobs=1): err= 0: pid=68759: Mon Jul 15 12:52:02 2024 00:09:47.074 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:09:47.074 slat (usec): min=8, max=12191, avg=166.49, stdev=1055.18 00:09:47.074 clat (usec): min=14010, max=41691, avg=23874.94, stdev=3058.24 00:09:47.074 lat (usec): min=14040, max=47222, avg=24041.43, stdev=3034.00 00:09:47.074 clat percentiles (usec): 00:09:47.074 | 1.00th=[15270], 5.00th=[18220], 10.00th=[22938], 20.00th=[23200], 
00:09:47.074 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:09:47.074 | 70.00th=[24249], 80.00th=[24773], 90.00th=[26084], 95.00th=[26608], 00:09:47.074 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:47.074 | 99.99th=[41681] 00:09:47.074 write: IOPS=2995, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1004msec); 0 zone resets 00:09:47.074 slat (usec): min=13, max=22476, avg=183.14, stdev=1195.09 00:09:47.074 clat (usec): min=1204, max=36063, avg=21973.42, stdev=3172.92 00:09:47.074 lat (usec): min=9578, max=36094, avg=22156.56, stdev=3004.97 00:09:47.074 clat percentiles (usec): 00:09:47.074 | 1.00th=[10421], 5.00th=[19792], 10.00th=[20317], 20.00th=[21103], 00:09:47.074 | 30.00th=[21627], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:09:47.074 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23200], 95.00th=[26346], 00:09:47.074 | 99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:09:47.074 | 99.99th=[35914] 00:09:47.074 bw ( KiB/s): min=10744, max=12288, per=16.51%, avg=11516.00, stdev=1091.77, samples=2 00:09:47.074 iops : min= 2686, max= 3072, avg=2879.00, stdev=272.94, samples=2 00:09:47.074 lat (msec) : 2=0.02%, 10=0.25%, 20=6.86%, 50=92.87% 00:09:47.074 cpu : usr=2.49%, sys=9.47%, ctx=163, majf=0, minf=13 00:09:47.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:47.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.074 issued rwts: total=2560,3007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.074 job3: (groupid=0, jobs=1): err= 0: pid=68760: Mon Jul 15 12:52:02 2024 00:09:47.074 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:09:47.074 slat (usec): min=9, max=9261, avg=92.10, stdev=533.48 00:09:47.074 clat (usec): min=7642, max=21061, avg=12727.23, stdev=1108.80 00:09:47.074 lat (usec): min=7657, max=23742, avg=12819.33, stdev=1161.63 00:09:47.074 clat percentiles (usec): 00:09:47.074 | 1.00th=[ 8848], 5.00th=[11469], 10.00th=[12125], 20.00th=[12256], 00:09:47.074 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:09:47.074 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13829], 00:09:47.074 | 99.00th=[16450], 99.50th=[18482], 99.90th=[20055], 99.95th=[20055], 00:09:47.074 | 99.99th=[21103] 00:09:47.074 write: IOPS=5358, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1002msec); 0 zone resets 00:09:47.074 slat (usec): min=9, max=8540, avg=91.44, stdev=520.55 00:09:47.074 clat (usec): min=628, max=16165, avg=11509.12, stdev=1183.68 00:09:47.074 lat (usec): min=5448, max=16188, avg=11600.56, stdev=1081.72 00:09:47.074 clat percentiles (usec): 00:09:47.074 | 1.00th=[ 6456], 5.00th=[10028], 10.00th=[10552], 20.00th=[11207], 00:09:47.074 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:09:47.074 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12256], 95.00th=[12780], 00:09:47.074 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16057], 99.95th=[16188], 00:09:47.074 | 99.99th=[16188] 00:09:47.074 bw ( KiB/s): min=20936, max=20936, per=30.02%, avg=20936.00, stdev= 0.00, samples=1 00:09:47.074 iops : min= 5234, max= 5234, avg=5234.00, stdev= 0.00, samples=1 00:09:47.074 lat (usec) : 750=0.01% 00:09:47.074 lat (msec) : 10=3.78%, 20=96.20%, 50=0.01% 00:09:47.074 cpu : usr=3.70%, sys=14.99%, ctx=251, majf=0, minf=4 00:09:47.074 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:47.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.074 issued rwts: total=5120,5369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.074 00:09:47.074 Run status group 0 (all jobs): 00:09:47.074 READ: bw=61.7MiB/s (64.7MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=62.0MiB (65.0MB), run=1001-1005msec 00:09:47.074 WRITE: bw=68.1MiB/s (71.4MB/s), 11.7MiB/s-23.7MiB/s (12.3MB/s-24.9MB/s), io=68.5MiB (71.8MB), run=1001-1005msec 00:09:47.074 00:09:47.074 Disk stats (read/write): 00:09:47.074 nvme0n1: ios=2224/2560, merge=0/0, ticks=51856/51561, in_queue=103417, util=88.49% 00:09:47.075 nvme0n2: ios=4874/5120, merge=0/0, ticks=52798/48532, in_queue=101330, util=87.91% 00:09:47.075 nvme0n3: ios=2116/2560, merge=0/0, ticks=48374/54190, in_queue=102564, util=89.14% 00:09:47.075 nvme0n4: ios=4342/4608, merge=0/0, ticks=52227/49156, in_queue=101383, util=89.70% 00:09:47.075 12:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:47.075 12:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:47.075 12:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68773 00:09:47.075 12:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:47.075 [global] 00:09:47.075 thread=1 00:09:47.075 invalidate=1 00:09:47.075 rw=read 00:09:47.075 time_based=1 00:09:47.075 runtime=10 00:09:47.075 ioengine=libaio 00:09:47.075 direct=1 00:09:47.075 bs=4096 00:09:47.075 iodepth=1 00:09:47.075 norandommap=1 00:09:47.075 numjobs=1 00:09:47.075 00:09:47.075 [job0] 00:09:47.075 filename=/dev/nvme0n1 00:09:47.075 [job1] 00:09:47.075 filename=/dev/nvme0n2 00:09:47.075 [job2] 00:09:47.075 filename=/dev/nvme0n3 00:09:47.075 [job3] 00:09:47.075 filename=/dev/nvme0n4 00:09:47.075 Could not set queue depth (nvme0n1) 00:09:47.075 Could not set queue depth (nvme0n2) 00:09:47.075 Could not set queue depth (nvme0n3) 00:09:47.075 Could not set queue depth (nvme0n4) 00:09:47.075 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.075 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.075 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.075 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.075 fio-3.35 00:09:47.075 Starting 4 threads 00:09:50.358 12:52:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:50.358 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=43823104, buflen=4096 00:09:50.358 fio: pid=68820, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:50.358 12:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:50.358 fio: pid=68816, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:50.358 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=49778688, buflen=4096 00:09:50.358 12:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:50.358 12:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:50.616 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=14467072, buflen=4096 00:09:50.616 fio: pid=68813, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:50.616 12:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.616 12:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:50.874 fio: pid=68814, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:50.874 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=22249472, buflen=4096 00:09:50.874 00:09:50.874 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68813: Mon Jul 15 12:52:06 2024 00:09:50.874 read: IOPS=5664, BW=22.1MiB/s (23.2MB/s)(77.8MiB/3516msec) 00:09:50.874 slat (usec): min=11, max=15837, avg=15.70, stdev=163.68 00:09:50.874 clat (usec): min=132, max=1836, avg=159.60, stdev=34.40 00:09:50.874 lat (usec): min=144, max=16035, avg=175.30, stdev=167.78 00:09:50.874 clat percentiles (usec): 00:09:50.874 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:09:50.874 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:09:50.874 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:09:50.874 | 99.00th=[ 204], 99.50th=[ 258], 99.90th=[ 553], 99.95th=[ 1037], 00:09:50.874 | 99.99th=[ 1631] 00:09:50.874 bw ( KiB/s): min=21696, max=23464, per=33.32%, avg=22710.33, stdev=783.51, samples=6 00:09:50.874 iops : min= 5424, max= 5866, avg=5677.50, stdev=195.89, samples=6 00:09:50.874 lat (usec) : 250=99.46%, 500=0.43%, 750=0.04%, 1000=0.02% 00:09:50.874 lat (msec) : 2=0.06% 00:09:50.874 cpu : usr=1.71%, sys=6.77%, ctx=19922, majf=0, minf=1 00:09:50.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.874 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.874 issued rwts: total=19917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.874 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68814: Mon Jul 15 12:52:06 2024 00:09:50.874 read: IOPS=5756, BW=22.5MiB/s (23.6MB/s)(85.2MiB/3790msec) 00:09:50.874 slat (usec): min=11, max=13524, avg=15.94, stdev=166.21 00:09:50.874 clat (usec): min=3, max=2462, avg=156.43, stdev=33.98 00:09:50.874 lat (usec): min=139, max=13702, avg=172.37, stdev=170.15 00:09:50.874 clat percentiles (usec): 00:09:50.874 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:09:50.874 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:09:50.874 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 176], 00:09:50.874 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 322], 99.95th=[ 603], 00:09:50.874 | 99.99th=[ 2008] 00:09:50.874 bw ( KiB/s): min=21960, max=23520, per=33.74%, avg=22996.71, stdev=623.80, samples=7 00:09:50.874 iops : min= 5490, max= 5880, avg=5749.00, stdev=156.07, samples=7 00:09:50.874 lat (usec) : 4=0.01%, 250=99.85%, 500=0.08%, 750=0.03% 00:09:50.874 lat (msec) : 2=0.02%, 4=0.01% 00:09:50.874 cpu : usr=1.85%, 
sys=6.70%, ctx=21828, majf=0, minf=1 00:09:50.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.874 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.874 issued rwts: total=21817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.874 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68816: Mon Jul 15 12:52:06 2024 00:09:50.874 read: IOPS=3757, BW=14.7MiB/s (15.4MB/s)(47.5MiB/3235msec) 00:09:50.874 slat (usec): min=8, max=15271, avg=15.79, stdev=159.99 00:09:50.874 clat (usec): min=141, max=2948, avg=249.14, stdev=52.81 00:09:50.874 lat (usec): min=153, max=15454, avg=264.93, stdev=167.70 00:09:50.874 clat percentiles (usec): 00:09:50.874 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 176], 20.00th=[ 243], 00:09:50.874 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:09:50.874 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:09:50.874 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 424], 99.95th=[ 717], 00:09:50.874 | 99.99th=[ 2114] 00:09:50.874 bw ( KiB/s): min=14459, max=14760, per=21.40%, avg=14587.50, stdev=120.27, samples=6 00:09:50.874 iops : min= 3614, max= 3690, avg=3646.67, stdev=30.27, samples=6 00:09:50.874 lat (usec) : 250=34.56%, 500=65.34%, 750=0.03% 00:09:50.874 lat (msec) : 2=0.02%, 4=0.02% 00:09:50.874 cpu : usr=1.52%, sys=4.64%, ctx=12159, majf=0, minf=1 00:09:50.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.874 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.874 issued rwts: total=12154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.874 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68820: Mon Jul 15 12:52:06 2024 00:09:50.874 read: IOPS=3628, BW=14.2MiB/s (14.9MB/s)(41.8MiB/2949msec) 00:09:50.874 slat (nsec): min=8370, max=49911, avg=13132.25, stdev=4192.60 00:09:50.874 clat (usec): min=188, max=2129, avg=261.07, stdev=33.24 00:09:50.874 lat (usec): min=206, max=2139, avg=274.20, stdev=33.73 00:09:50.874 clat percentiles (usec): 00:09:50.874 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:09:50.874 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:09:50.874 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 285], 00:09:50.874 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 371], 99.95th=[ 603], 00:09:50.874 | 99.99th=[ 2089] 00:09:50.874 bw ( KiB/s): min=14451, max=14768, per=21.38%, avg=14570.20, stdev=122.61, samples=5 00:09:50.874 iops : min= 3612, max= 3692, avg=3642.40, stdev=30.84, samples=5 00:09:50.874 lat (usec) : 250=22.74%, 500=77.18%, 750=0.03%, 1000=0.01% 00:09:50.874 lat (msec) : 2=0.02%, 4=0.02% 00:09:50.874 cpu : usr=1.32%, sys=4.27%, ctx=10700, majf=0, minf=1 00:09:50.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.874 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.874 issued rwts: total=10700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.874 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:50.874 00:09:50.874 Run status group 0 (all jobs): 00:09:50.874 READ: bw=66.6MiB/s (69.8MB/s), 14.2MiB/s-22.5MiB/s (14.9MB/s-23.6MB/s), io=252MiB (265MB), run=2949-3790msec 00:09:50.874 00:09:50.874 Disk stats (read/write): 00:09:50.874 nvme0n1: ios=19035/0, merge=0/0, ticks=3087/0, in_queue=3087, util=95.22% 00:09:50.874 nvme0n2: ios=20768/0, merge=0/0, ticks=3297/0, in_queue=3297, util=95.43% 00:09:50.874 nvme0n3: ios=11516/0, merge=0/0, ticks=2849/0, in_queue=2849, util=96.12% 00:09:50.874 nvme0n4: ios=10429/0, merge=0/0, ticks=2612/0, in_queue=2612, util=96.76% 00:09:50.875 12:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.875 12:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:51.133 12:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.133 12:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:51.391 12:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.391 12:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:51.649 12:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.649 12:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:51.909 12:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.909 12:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68773 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:52.168 nvmf hotplug test: fio failed as expected 00:09:52.168 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:52.168 12:52:08 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.428 rmmod nvme_tcp 00:09:52.428 rmmod nvme_fabrics 00:09:52.428 rmmod nvme_keyring 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68392 ']' 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68392 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68392 ']' 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68392 00:09:52.428 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68392 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:52.686 killing process with pid 68392 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68392' 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68392 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68392 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.686 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.686 12:52:08 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.945 12:52:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:52.945 00:09:52.945 real 0m19.463s 00:09:52.945 user 1m13.561s 00:09:52.945 sys 0m10.356s 00:09:52.945 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.945 12:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.945 ************************************ 00:09:52.945 END TEST nvmf_fio_target 00:09:52.945 ************************************ 00:09:52.945 12:52:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:52.945 12:52:08 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:52.945 12:52:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.945 12:52:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.945 12:52:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.945 ************************************ 00:09:52.945 START TEST nvmf_bdevio 00:09:52.945 ************************************ 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:52.945 * Looking for test storage... 00:09:52.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.945 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.946 12:52:08 
nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:52.946 Cannot find device "nvmf_tgt_br" 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.946 Cannot find device "nvmf_tgt_br2" 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:52.946 12:52:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:52.946 Cannot find device 
"nvmf_tgt_br" 00:09:52.946 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:52.946 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:53.205 Cannot find device "nvmf_tgt_br2" 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.205 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:53.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:09:53.206 00:09:53.206 --- 10.0.0.2 ping statistics --- 00:09:53.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.206 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:53.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:53.206 00:09:53.206 --- 10.0.0.3 ping statistics --- 00:09:53.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.206 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:53.206 00:09:53.206 --- 10.0.0.1 ping statistics --- 00:09:53.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.206 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.206 12:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69085 00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69085 00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69085 ']' 00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
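For reference, the network bring-up traced above (nvmf_veth_init) reduces to a small veth/namespace topology. A minimal sketch, assuming the same interface names and 10.0.0.0/24 addressing seen in this run; it only restates commands already visible in the trace:

# Target lives in its own namespace; three veth pairs connect it to the host
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring interfaces up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic on port 4420 and verify connectivity both ways
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1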
00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.467 12:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.467 [2024-07-15 12:52:09.321808] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:53.467 [2024-07-15 12:52:09.321902] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.467 [2024-07-15 12:52:09.467908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.725 [2024-07-15 12:52:09.601569] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.725 [2024-07-15 12:52:09.601643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.725 [2024-07-15 12:52:09.601659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.725 [2024-07-15 12:52:09.601671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.725 [2024-07-15 12:52:09.601681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.725 [2024-07-15 12:52:09.601831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.725 [2024-07-15 12:52:09.601967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:53.725 [2024-07-15 12:52:09.602620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:53.725 [2024-07-15 12:52:09.602632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.725 [2024-07-15 12:52:09.664119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.292 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.292 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:54.292 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.292 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:54.292 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.552 [2024-07-15 12:52:10.372898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.552 Malloc0 00:09:54.552 
12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.552 [2024-07-15 12:52:10.440411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:54.552 { 00:09:54.552 "params": { 00:09:54.552 "name": "Nvme$subsystem", 00:09:54.552 "trtype": "$TEST_TRANSPORT", 00:09:54.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.552 "adrfam": "ipv4", 00:09:54.552 "trsvcid": "$NVMF_PORT", 00:09:54.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.552 "hdgst": ${hdgst:-false}, 00:09:54.552 "ddgst": ${ddgst:-false} 00:09:54.552 }, 00:09:54.552 "method": "bdev_nvme_attach_controller" 00:09:54.552 } 00:09:54.552 EOF 00:09:54.552 )") 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:54.552 12:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:54.552 "params": { 00:09:54.552 "name": "Nvme1", 00:09:54.552 "trtype": "tcp", 00:09:54.552 "traddr": "10.0.0.2", 00:09:54.552 "adrfam": "ipv4", 00:09:54.552 "trsvcid": "4420", 00:09:54.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.552 "hdgst": false, 00:09:54.552 "ddgst": false 00:09:54.552 }, 00:09:54.552 "method": "bdev_nvme_attach_controller" 00:09:54.552 }' 00:09:54.552 [2024-07-15 12:52:10.499283] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:09:54.552 [2024-07-15 12:52:10.499418] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69121 ] 00:09:54.810 [2024-07-15 12:52:10.643715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:54.810 [2024-07-15 12:52:10.780996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.810 [2024-07-15 12:52:10.781133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.810 [2024-07-15 12:52:10.781140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.810 [2024-07-15 12:52:10.848394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:55.069 I/O targets: 00:09:55.069 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:55.069 00:09:55.069 00:09:55.069 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.069 http://cunit.sourceforge.net/ 00:09:55.069 00:09:55.069 00:09:55.069 Suite: bdevio tests on: Nvme1n1 00:09:55.069 Test: blockdev write read block ...passed 00:09:55.069 Test: blockdev write zeroes read block ...passed 00:09:55.069 Test: blockdev write zeroes read no split ...passed 00:09:55.069 Test: blockdev write zeroes read split ...passed 00:09:55.069 Test: blockdev write zeroes read split partial ...passed 00:09:55.069 Test: blockdev reset ...[2024-07-15 12:52:10.994708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:55.069 [2024-07-15 12:52:10.994809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc7c0 (9): Bad file descriptor 00:09:55.069 [2024-07-15 12:52:11.010836] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:55.069 passed 00:09:55.069 Test: blockdev write read 8 blocks ...passed 00:09:55.069 Test: blockdev write read size > 128k ...passed 00:09:55.069 Test: blockdev write read invalid size ...passed 00:09:55.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:55.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:55.069 Test: blockdev write read max offset ...passed 00:09:55.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:55.069 Test: blockdev writev readv 8 blocks ...passed 00:09:55.069 Test: blockdev writev readv 30 x 1block ...passed 00:09:55.069 Test: blockdev writev readv block ...passed 00:09:55.069 Test: blockdev writev readv size > 128k ...passed 00:09:55.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:55.069 Test: blockdev comparev and writev ...[2024-07-15 12:52:11.018749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.069 [2024-07-15 12:52:11.018896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:55.069 [2024-07-15 12:52:11.018989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.069 [2024-07-15 12:52:11.019064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:55.069 [2024-07-15 12:52:11.019563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.069 [2024-07-15 12:52:11.019673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:55.070 [2024-07-15 12:52:11.019752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.070 [2024-07-15 12:52:11.019818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:55.070 [2024-07-15 12:52:11.020248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.070 [2024-07-15 12:52:11.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:55.070 [2024-07-15 12:52:11.020481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.070 [2024-07-15 12:52:11.020563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:55.070 [2024-07-15 12:52:11.020963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.070 [2024-07-15 12:52:11.021061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:55.070 [2024-07-15 12:52:11.021138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.070 [2024-07-15 12:52:11.021214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:55.070 passed 00:09:55.070 Test: blockdev nvme passthru rw ...passed 00:09:55.070 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:52:11.022208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.070 [2024-07-15 12:52:11.022326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:55.070 [2024-07-15 12:52:11.022547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.070 [2024-07-15 12:52:11.022645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:55.070 [2024-07-15 12:52:11.022823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.070 [2024-07-15 12:52:11.022906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:55.070 [2024-07-15 12:52:11.023092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.070 [2024-07-15 12:52:11.023192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:55.070 passed 00:09:55.070 Test: blockdev nvme admin passthru ...passed 00:09:55.070 Test: blockdev copy ...passed 00:09:55.070 00:09:55.070 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.070 suites 1 1 n/a 0 0 00:09:55.070 tests 23 23 23 0 0 00:09:55.070 asserts 152 152 152 0 n/a 00:09:55.070 00:09:55.070 Elapsed time = 0.150 seconds 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.329 rmmod nvme_tcp 00:09:55.329 rmmod nvme_fabrics 00:09:55.329 rmmod nvme_keyring 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69085 ']' 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69085 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69085 ']' 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69085 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69085 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:55.329 killing process with pid 69085 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69085' 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69085 00:09:55.329 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69085 00:09:55.587 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.587 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.587 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.588 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.588 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.588 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.588 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.588 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.588 12:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:55.847 00:09:55.847 real 0m2.817s 00:09:55.847 user 0m9.387s 00:09:55.847 sys 0m0.785s 00:09:55.847 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:55.847 ************************************ 00:09:55.847 END TEST nvmf_bdevio 00:09:55.847 ************************************ 00:09:55.847 12:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.847 12:52:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:55.847 12:52:11 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:55.847 12:52:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:55.847 12:52:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.847 12:52:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:55.847 ************************************ 00:09:55.847 START TEST nvmf_auth_target 00:09:55.847 ************************************ 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:55.847 * Looking for test storage... 
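The bdevio stage that just finished drove the whole target lifecycle through rpc_cmd. As a minimal sketch, the equivalent rpc.py sequence (same subsystem name, bdev size/block size and listener address as in the trace above; rpc.py path as used by fio.sh earlier; nvmf_tgt assumed already running in the target namespace) would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with 8 KiB in-capsule data, as requested by bdevio.sh
$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks, exported as a namespace of cnode1
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# ... bdevio --json <config from gen_nvmf_target_json> runs its test suite here ...

# Teardown, as done at the end of the stage
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1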
00:09:55.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.847 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:55.848 Cannot find device "nvmf_tgt_br" 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.848 Cannot find device "nvmf_tgt_br2" 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:55.848 Cannot find device "nvmf_tgt_br" 00:09:55.848 
12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:55.848 Cannot find device "nvmf_tgt_br2" 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:55.848 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:56.107 12:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.107 12:52:12 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:56.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:09:56.107 00:09:56.107 --- 10.0.0.2 ping statistics --- 00:09:56.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.107 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:56.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:56.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:56.107 00:09:56.107 --- 10.0.0.3 ping statistics --- 00:09:56.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.107 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:56.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:56.107 00:09:56.107 --- 10.0.0.1 ping statistics --- 00:09:56.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.107 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69289 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69289 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69289 ']' 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.107 12:52:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.107 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69327 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a6d7dc22ce31878f6ac483d1868a592f0205ba03d69b1431 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oxf 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a6d7dc22ce31878f6ac483d1868a592f0205ba03d69b1431 0 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a6d7dc22ce31878f6ac483d1868a592f0205ba03d69b1431 0 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a6d7dc22ce31878f6ac483d1868a592f0205ba03d69b1431 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oxf 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oxf 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.oxf 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f8b2a86f0881469dfbd5fbef6279054f2a95f5f82321a6018d3ad0c1c32cc9e1 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oUr 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f8b2a86f0881469dfbd5fbef6279054f2a95f5f82321a6018d3ad0c1c32cc9e1 3 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f8b2a86f0881469dfbd5fbef6279054f2a95f5f82321a6018d3ad0c1c32cc9e1 3 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f8b2a86f0881469dfbd5fbef6279054f2a95f5f82321a6018d3ad0c1c32cc9e1 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oUr 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oUr 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.oUr 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eb76b7688ce038f6bc01b665bd9701ef 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zrP 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eb76b7688ce038f6bc01b665bd9701ef 1 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eb76b7688ce038f6bc01b665bd9701ef 1 
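The gen_dhchap_key calls traced above draw a random hex string from /dev/urandom and wrap it into the DH-HMAC-CHAP secret representation DHHC-1:<digest-id>:<base64 blob>:. A hand-rolled sketch of the same steps follows; the Python helper is a guess at what format_key in test/nvmf/common.sh does (base64 of the hex text with what looks like a little-endian CRC-32 appended), so treat it as illustrative rather than the harness's own code:

# 24 random bytes -> 48 hex characters, matching the 'gen_dhchap_key null 48' trace above
key=$(xxd -p -c0 -l 24 /dev/urandom)
digest_id=0        # 0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map in the trace
secret=$(python3 - "$key" "$digest_id" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                              # the hex text itself is the secret
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)   # assumed checksum suffix
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
EOF
)
keyfile=$(mktemp -t spdk.key-null.XXX)                  # e.g. /tmp/spdk.key-null.oxf in this run
echo "$secret" > "$keyfile"
chmod 0600 "$keyfile"

The resulting DHHC-1 strings are what later appear verbatim in the nvme connect --dhchap-secret and --dhchap-ctrl-secret arguments.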
00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eb76b7688ce038f6bc01b665bd9701ef 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zrP 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zrP 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.zrP 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=93693af56c00694935f6d625c915c3244d78ab854988a74e 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Mhv 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 93693af56c00694935f6d625c915c3244d78ab854988a74e 2 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 93693af56c00694935f6d625c915c3244d78ab854988a74e 2 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=93693af56c00694935f6d625c915c3244d78ab854988a74e 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Mhv 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Mhv 00:09:57.482 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Mhv 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:57.483 
12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ffd052a801a91e4b2c48fe316f6febad14be9519b2950f5b 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xYM 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ffd052a801a91e4b2c48fe316f6febad14be9519b2950f5b 2 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ffd052a801a91e4b2c48fe316f6febad14be9519b2950f5b 2 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ffd052a801a91e4b2c48fe316f6febad14be9519b2950f5b 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xYM 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xYM 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.xYM 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0204f0d987538f1cf956f7cb04db0a49 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TPr 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0204f0d987538f1cf956f7cb04db0a49 1 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0204f0d987538f1cf956f7cb04db0a49 1 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0204f0d987538f1cf956f7cb04db0a49 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:57.483 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TPr 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TPr 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.TPr 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6f1675004e58a0c854080a5b413016705b267e3928cd77590c8535c5a2d82cfa 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qqX 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6f1675004e58a0c854080a5b413016705b267e3928cd77590c8535c5a2d82cfa 3 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6f1675004e58a0c854080a5b413016705b267e3928cd77590c8535c5a2d82cfa 3 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6f1675004e58a0c854080a5b413016705b267e3928cd77590c8535c5a2d82cfa 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qqX 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qqX 00:09:57.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qqX 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69289 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69289 ']' 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
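From here the script registers every generated secret with both RPC servers and then walks each key through one authenticated connect. Condensed for key0/ckey0, and assuming the TCP transport, subsystem nqn.2024-03.io.spdk:cnode0 and its 10.0.0.2:4420 listener were already created by the batched rpc_cmd traced just below, the round trip amounts to:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88

# 1. Register the key files with the target (default /var/tmp/spdk.sock)
#    and with the host-side spdk_tgt (/var/tmp/host.sock).
$RPC keyring_file_add_key key0 /tmp/spdk.key-null.oxf
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oUr
$RPC -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oxf
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oUr

# 2. Allow the host on the subsystem, binding the DH-HMAC-CHAP keys.
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach from the host side and confirm the qpair finished authentication.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The kernel-initiator pass later in the trace repeats the same handshake with nvme connect, feeding the DHHC-1 strings directly through --dhchap-secret and --dhchap-ctrl-secret.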
00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.741 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69327 /var/tmp/host.sock 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69327 ']' 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:58.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.000 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.259 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oxf 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oxf 00:09:58.260 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oxf 00:09:58.519 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.oUr ]] 00:09:58.519 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oUr 00:09:58.519 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.519 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.519 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.519 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oUr 00:09:58.519 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.oUr 00:09:58.778 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:58.778 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.zrP 00:09:58.778 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.778 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.778 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.778 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.zrP 00:09:58.778 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.zrP 00:09:59.037 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Mhv ]] 00:09:59.037 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Mhv 00:09:59.037 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.037 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.037 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.037 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Mhv 00:09:59.037 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Mhv 00:09:59.296 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:59.296 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xYM 00:09:59.296 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.296 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.296 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.296 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xYM 00:09:59.296 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xYM 00:09:59.554 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.TPr ]] 00:09:59.554 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPr 00:09:59.554 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.554 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.554 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.554 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPr 00:09:59.555 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPr 00:09:59.814 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:59.814 
12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qqX 00:09:59.814 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.814 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.814 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.814 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qqX 00:09:59.814 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qqX 00:10:00.073 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:00.073 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:00.073 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:00.073 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:00.073 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:00.073 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.332 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.333 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.333 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.333 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.647 00:10:00.647 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:00.647 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:10:00.647 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:00.905 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.905 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.905 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.905 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.905 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.905 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:00.905 { 00:10:00.905 "cntlid": 1, 00:10:00.905 "qid": 0, 00:10:00.905 "state": "enabled", 00:10:00.905 "thread": "nvmf_tgt_poll_group_000", 00:10:00.905 "listen_address": { 00:10:00.905 "trtype": "TCP", 00:10:00.905 "adrfam": "IPv4", 00:10:00.905 "traddr": "10.0.0.2", 00:10:00.905 "trsvcid": "4420" 00:10:00.905 }, 00:10:00.905 "peer_address": { 00:10:00.905 "trtype": "TCP", 00:10:00.905 "adrfam": "IPv4", 00:10:00.905 "traddr": "10.0.0.1", 00:10:00.905 "trsvcid": "42456" 00:10:00.905 }, 00:10:00.905 "auth": { 00:10:00.905 "state": "completed", 00:10:00.905 "digest": "sha256", 00:10:00.905 "dhgroup": "null" 00:10:00.905 } 00:10:00.905 } 00:10:00.905 ]' 00:10:00.905 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:01.164 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.164 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:01.164 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:01.164 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:01.164 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.164 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.164 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.422 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:10:06.688 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.688 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:06.688 12:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.688 12:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.688 12:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.688 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:06.688 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:06.688 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.688 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.688 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:06.946 { 00:10:06.946 "cntlid": 3, 00:10:06.946 "qid": 0, 00:10:06.946 "state": "enabled", 00:10:06.946 "thread": "nvmf_tgt_poll_group_000", 00:10:06.946 "listen_address": { 00:10:06.946 "trtype": "TCP", 00:10:06.946 "adrfam": "IPv4", 00:10:06.946 "traddr": "10.0.0.2", 00:10:06.946 "trsvcid": "4420" 00:10:06.946 }, 00:10:06.946 "peer_address": { 00:10:06.946 "trtype": "TCP", 00:10:06.946 
"adrfam": "IPv4", 00:10:06.946 "traddr": "10.0.0.1", 00:10:06.946 "trsvcid": "53798" 00:10:06.946 }, 00:10:06.946 "auth": { 00:10:06.946 "state": "completed", 00:10:06.946 "digest": "sha256", 00:10:06.946 "dhgroup": "null" 00:10:06.946 } 00:10:06.946 } 00:10:06.946 ]' 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.946 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.204 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:10:08.199 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.199 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:08.199 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.199 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.199 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.199 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:08.199 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:08.199 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.199 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.457 00:10:08.457 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:08.457 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.457 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:09.024 { 00:10:09.024 "cntlid": 5, 00:10:09.024 "qid": 0, 00:10:09.024 "state": "enabled", 00:10:09.024 "thread": "nvmf_tgt_poll_group_000", 00:10:09.024 "listen_address": { 00:10:09.024 "trtype": "TCP", 00:10:09.024 "adrfam": "IPv4", 00:10:09.024 "traddr": "10.0.0.2", 00:10:09.024 "trsvcid": "4420" 00:10:09.024 }, 00:10:09.024 "peer_address": { 00:10:09.024 "trtype": "TCP", 00:10:09.024 "adrfam": "IPv4", 00:10:09.024 "traddr": "10.0.0.1", 00:10:09.024 "trsvcid": "53824" 00:10:09.024 }, 00:10:09.024 "auth": { 00:10:09.024 "state": "completed", 00:10:09.024 "digest": "sha256", 00:10:09.024 "dhgroup": "null" 00:10:09.024 } 00:10:09.024 } 00:10:09.024 ]' 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.024 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.283 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:10:10.218 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.218 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:10.218 12:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.218 12:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.218 12:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.218 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:10.218 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.218 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:10.219 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:10.476 00:10:10.735 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:10.735 12:52:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.735 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:10.994 { 00:10:10.994 "cntlid": 7, 00:10:10.994 "qid": 0, 00:10:10.994 "state": "enabled", 00:10:10.994 "thread": "nvmf_tgt_poll_group_000", 00:10:10.994 "listen_address": { 00:10:10.994 "trtype": "TCP", 00:10:10.994 "adrfam": "IPv4", 00:10:10.994 "traddr": "10.0.0.2", 00:10:10.994 "trsvcid": "4420" 00:10:10.994 }, 00:10:10.994 "peer_address": { 00:10:10.994 "trtype": "TCP", 00:10:10.994 "adrfam": "IPv4", 00:10:10.994 "traddr": "10.0.0.1", 00:10:10.994 "trsvcid": "53858" 00:10:10.994 }, 00:10:10.994 "auth": { 00:10:10.994 "state": "completed", 00:10:10.994 "digest": "sha256", 00:10:10.994 "dhgroup": "null" 00:10:10.994 } 00:10:10.994 } 00:10:10.994 ]' 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:10.994 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.995 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.995 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.253 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:12.189 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.189 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.447 00:10:12.447 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:12.447 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.447 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:12.706 { 00:10:12.706 "cntlid": 9, 00:10:12.706 "qid": 0, 00:10:12.706 "state": "enabled", 00:10:12.706 "thread": "nvmf_tgt_poll_group_000", 00:10:12.706 "listen_address": { 00:10:12.706 "trtype": "TCP", 00:10:12.706 "adrfam": "IPv4", 00:10:12.706 
"traddr": "10.0.0.2", 00:10:12.706 "trsvcid": "4420" 00:10:12.706 }, 00:10:12.706 "peer_address": { 00:10:12.706 "trtype": "TCP", 00:10:12.706 "adrfam": "IPv4", 00:10:12.706 "traddr": "10.0.0.1", 00:10:12.706 "trsvcid": "53888" 00:10:12.706 }, 00:10:12.706 "auth": { 00:10:12.706 "state": "completed", 00:10:12.706 "digest": "sha256", 00:10:12.706 "dhgroup": "ffdhe2048" 00:10:12.706 } 00:10:12.706 } 00:10:12.706 ]' 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.706 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:12.964 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:12.964 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:12.964 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.964 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.964 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.223 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:10:13.790 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.790 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:13.790 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.790 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.790 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.790 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:13.790 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:13.790 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.048 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.306 00:10:14.306 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:14.306 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:14.306 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.564 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.564 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.564 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.564 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:14.823 { 00:10:14.823 "cntlid": 11, 00:10:14.823 "qid": 0, 00:10:14.823 "state": "enabled", 00:10:14.823 "thread": "nvmf_tgt_poll_group_000", 00:10:14.823 "listen_address": { 00:10:14.823 "trtype": "TCP", 00:10:14.823 "adrfam": "IPv4", 00:10:14.823 "traddr": "10.0.0.2", 00:10:14.823 "trsvcid": "4420" 00:10:14.823 }, 00:10:14.823 "peer_address": { 00:10:14.823 "trtype": "TCP", 00:10:14.823 "adrfam": "IPv4", 00:10:14.823 "traddr": "10.0.0.1", 00:10:14.823 "trsvcid": "53918" 00:10:14.823 }, 00:10:14.823 "auth": { 00:10:14.823 "state": "completed", 00:10:14.823 "digest": "sha256", 00:10:14.823 "dhgroup": "ffdhe2048" 00:10:14.823 } 00:10:14.823 } 00:10:14.823 ]' 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.823 12:52:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.823 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.188 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:10:15.756 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.756 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:15.756 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.756 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.756 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.756 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:15.756 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:15.756 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.016 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.275 00:10:16.275 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:16.275 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:16.275 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.534 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.534 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.534 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.534 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.534 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:16.534 { 00:10:16.534 "cntlid": 13, 00:10:16.534 "qid": 0, 00:10:16.534 "state": "enabled", 00:10:16.534 "thread": "nvmf_tgt_poll_group_000", 00:10:16.534 "listen_address": { 00:10:16.534 "trtype": "TCP", 00:10:16.534 "adrfam": "IPv4", 00:10:16.534 "traddr": "10.0.0.2", 00:10:16.534 "trsvcid": "4420" 00:10:16.534 }, 00:10:16.534 "peer_address": { 00:10:16.534 "trtype": "TCP", 00:10:16.534 "adrfam": "IPv4", 00:10:16.534 "traddr": "10.0.0.1", 00:10:16.534 "trsvcid": "47116" 00:10:16.534 }, 00:10:16.534 "auth": { 00:10:16.534 "state": "completed", 00:10:16.534 "digest": "sha256", 00:10:16.534 "dhgroup": "ffdhe2048" 00:10:16.534 } 00:10:16.534 } 00:10:16.534 ]' 00:10:16.534 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:16.793 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.793 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:16.793 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:16.793 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:16.793 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.793 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.793 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.051 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:10:17.999 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.999 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 
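The pass that just finished (connect_authenticate sha256 ffdhe2048 2) shows the shape every iteration of this test follows: the host-side bdev layer (RPC socket /var/tmp/host.sock) is restricted to one DH-HMAC-CHAP digest/dhgroup combination, the target registers the host NQN on nqn.2024-03.io.spdk:cnode0 with the --dhchap-key under test (plus --dhchap-ctrlr-key when a companion ckey exists, which exercises bidirectional authentication), bdev_nvme_attach_controller performs the authenticated connect, the qpair's auth object is checked, the controller is detached, and the same secrets are then exercised once more through the kernel initiator with nvme connect/disconnect before nvmf_subsystem_remove_host cleans up. A condensed sketch of that sequence, assuming the paths, NQNs and key names from this run (rpc.py is the same script the trace calls by absolute path; $hostnqn, $hostid and the two $*_secret variables stand in for the literal values printed above; the key0..key3/ckey0..ckey3 key names were registered earlier in the log and are not shown in this excerpt):

  # host side: allow only the digest/dhgroup combination under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target side: permit the host NQN with this key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # authenticated connect from the SPDK host
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # confirm authentication actually completed on the target-side qpair
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
  # tear down, then repeat the connect through the kernel initiator with the literal DHHC-1 secrets
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key2_secret" --dhchap-ctrl-secret "$ckey2_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"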
00:10:17.999 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.999 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.999 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.999 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:17.999 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.999 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:17.999 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:18.674 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:18.674 { 00:10:18.674 "cntlid": 15, 00:10:18.674 "qid": 0, 
00:10:18.674 "state": "enabled", 00:10:18.674 "thread": "nvmf_tgt_poll_group_000", 00:10:18.674 "listen_address": { 00:10:18.674 "trtype": "TCP", 00:10:18.674 "adrfam": "IPv4", 00:10:18.674 "traddr": "10.0.0.2", 00:10:18.674 "trsvcid": "4420" 00:10:18.674 }, 00:10:18.674 "peer_address": { 00:10:18.674 "trtype": "TCP", 00:10:18.674 "adrfam": "IPv4", 00:10:18.674 "traddr": "10.0.0.1", 00:10:18.674 "trsvcid": "47140" 00:10:18.674 }, 00:10:18.674 "auth": { 00:10:18.674 "state": "completed", 00:10:18.674 "digest": "sha256", 00:10:18.674 "dhgroup": "ffdhe2048" 00:10:18.674 } 00:10:18.674 } 00:10:18.674 ]' 00:10:18.674 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:18.933 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.933 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:18.933 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:18.933 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:18.933 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.933 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.933 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.192 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:20.128 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.128 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.692 00:10:20.692 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:20.692 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:20.692 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:20.949 { 00:10:20.949 "cntlid": 17, 00:10:20.949 "qid": 0, 00:10:20.949 "state": "enabled", 00:10:20.949 "thread": "nvmf_tgt_poll_group_000", 00:10:20.949 "listen_address": { 00:10:20.949 "trtype": "TCP", 00:10:20.949 "adrfam": "IPv4", 00:10:20.949 "traddr": "10.0.0.2", 00:10:20.949 "trsvcid": "4420" 00:10:20.949 }, 00:10:20.949 "peer_address": { 00:10:20.949 "trtype": "TCP", 00:10:20.949 "adrfam": "IPv4", 00:10:20.949 "traddr": "10.0.0.1", 00:10:20.949 "trsvcid": "47168" 00:10:20.949 }, 00:10:20.949 "auth": { 00:10:20.949 "state": "completed", 00:10:20.949 "digest": "sha256", 00:10:20.949 "dhgroup": "ffdhe3072" 00:10:20.949 } 00:10:20.949 } 00:10:20.949 ]' 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.949 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.217 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:10:22.152 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.152 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:22.152 12:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.152 12:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.152 12:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.152 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:22.152 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:22.152 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.408 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.408 
12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.665 00:10:22.665 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:22.665 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.665 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.922 { 00:10:22.922 "cntlid": 19, 00:10:22.922 "qid": 0, 00:10:22.922 "state": "enabled", 00:10:22.922 "thread": "nvmf_tgt_poll_group_000", 00:10:22.922 "listen_address": { 00:10:22.922 "trtype": "TCP", 00:10:22.922 "adrfam": "IPv4", 00:10:22.922 "traddr": "10.0.0.2", 00:10:22.922 "trsvcid": "4420" 00:10:22.922 }, 00:10:22.922 "peer_address": { 00:10:22.922 "trtype": "TCP", 00:10:22.922 "adrfam": "IPv4", 00:10:22.922 "traddr": "10.0.0.1", 00:10:22.922 "trsvcid": "47196" 00:10:22.922 }, 00:10:22.922 "auth": { 00:10:22.922 "state": "completed", 00:10:22.922 "digest": "sha256", 00:10:22.922 "dhgroup": "ffdhe3072" 00:10:22.922 } 00:10:22.922 } 00:10:22.922 ]' 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.922 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:23.179 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:23.179 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:23.179 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.179 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.179 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.436 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:10:24.001 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
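In every pass, between the attach and the detach, the trace dumps the qpair list for nqn.2024-03.io.spdk:cnode0 and asserts three fields of its auth object with jq: the negotiated digest, the negotiated dhgroup, and that the authentication state reached "completed". A minimal sketch of those checks, assuming the qpairs JSON has the shape shown in the dumps above (for the pass that just ended the expected values were sha256/ffdhe3072):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # the single qpair carries the parameters negotiated during DH-HMAC-CHAP
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # host-side companion check: the authenticated controller came up under the expected name
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]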
00:10:24.001 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:24.001 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.001 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.001 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.001 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:24.001 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.001 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.259 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.516 00:10:24.516 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:24.516 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.516 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.773 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.773 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.773 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.773 12:52:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.773 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.773 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.773 { 00:10:24.773 "cntlid": 21, 00:10:24.773 "qid": 0, 00:10:24.773 "state": "enabled", 00:10:24.773 "thread": "nvmf_tgt_poll_group_000", 00:10:24.773 "listen_address": { 00:10:24.773 "trtype": "TCP", 00:10:24.773 "adrfam": "IPv4", 00:10:24.773 "traddr": "10.0.0.2", 00:10:24.773 "trsvcid": "4420" 00:10:24.773 }, 00:10:24.773 "peer_address": { 00:10:24.773 "trtype": "TCP", 00:10:24.773 "adrfam": "IPv4", 00:10:24.773 "traddr": "10.0.0.1", 00:10:24.774 "trsvcid": "47230" 00:10:24.774 }, 00:10:24.774 "auth": { 00:10:24.774 "state": "completed", 00:10:24.774 "digest": "sha256", 00:10:24.774 "dhgroup": "ffdhe3072" 00:10:24.774 } 00:10:24.774 } 00:10:24.774 ]' 00:10:24.774 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:25.031 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.031 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:25.031 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:25.031 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:25.031 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.031 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.031 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.288 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:10:25.853 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.853 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:25.853 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.853 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.853 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.853 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.853 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:25.853 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.109 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:26.109 12:52:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:26.109 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:26.110 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:26.367 00:10:26.367 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:26.367 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:26.367 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.625 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.625 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.625 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.625 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.625 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.625 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:26.625 { 00:10:26.625 "cntlid": 23, 00:10:26.625 "qid": 0, 00:10:26.625 "state": "enabled", 00:10:26.625 "thread": "nvmf_tgt_poll_group_000", 00:10:26.625 "listen_address": { 00:10:26.625 "trtype": "TCP", 00:10:26.625 "adrfam": "IPv4", 00:10:26.625 "traddr": "10.0.0.2", 00:10:26.625 "trsvcid": "4420" 00:10:26.625 }, 00:10:26.625 "peer_address": { 00:10:26.625 "trtype": "TCP", 00:10:26.625 "adrfam": "IPv4", 00:10:26.625 "traddr": "10.0.0.1", 00:10:26.625 "trsvcid": "47024" 00:10:26.625 }, 00:10:26.625 "auth": { 00:10:26.625 "state": "completed", 00:10:26.625 "digest": "sha256", 00:10:26.625 "dhgroup": "ffdhe3072" 00:10:26.625 } 00:10:26.625 } 00:10:26.625 ]' 00:10:26.625 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:26.882 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.882 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:10:26.882 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:26.882 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:26.883 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.883 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.883 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.141 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:27.707 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:27.967 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:27.967 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.967 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.967 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:27.967 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:27.967 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.967 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.967 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.967 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.967 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.967 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.532 00:10:28.532 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:28.533 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:28.533 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.791 { 00:10:28.791 "cntlid": 25, 00:10:28.791 "qid": 0, 00:10:28.791 "state": "enabled", 00:10:28.791 "thread": "nvmf_tgt_poll_group_000", 00:10:28.791 "listen_address": { 00:10:28.791 "trtype": "TCP", 00:10:28.791 "adrfam": "IPv4", 00:10:28.791 "traddr": "10.0.0.2", 00:10:28.791 "trsvcid": "4420" 00:10:28.791 }, 00:10:28.791 "peer_address": { 00:10:28.791 "trtype": "TCP", 00:10:28.791 "adrfam": "IPv4", 00:10:28.791 "traddr": "10.0.0.1", 00:10:28.791 "trsvcid": "47054" 00:10:28.791 }, 00:10:28.791 "auth": { 00:10:28.791 "state": "completed", 00:10:28.791 "digest": "sha256", 00:10:28.791 "dhgroup": "ffdhe4096" 00:10:28.791 } 00:10:28.791 } 00:10:28.791 ]' 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.791 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.049 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:10:29.986 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.986 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:29.986 12:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.986 12:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.986 12:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.986 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.986 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:29.986 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.986 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.553 00:10:30.553 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.553 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.553 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.811 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
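The alternating set_options/connect_authenticate blocks in this section come from two nested loops in target/auth.sh, visible in the trace as auth.sh@92 (for dhgroup in "${dhgroups[@]}") and auth.sh@93 (for keyid in "${!keys[@]}"): every DH group is exercised against every key slot, and slots that also have a ckey defined additionally test controller (bidirectional) authentication via --dhchap-ctrlr-key. A rough sketch of the driver loop under those assumptions; the digests/dhgroups/keys/ckeys arrays are populated earlier in the run and are not shown in this excerpt, and the outer digest loop is inferred only from the digest staying fixed at sha256 throughout these passes:

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # constrain the host to one digest/dhgroup pair, then authenticate with key slot $keyid
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done

So far the sha256 passes have covered the null, ffdhe2048, ffdhe3072 and ffdhe4096 groups; the remaining group/key combinations in the log presumably follow the same pattern.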
00:10:30.811 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.811 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.811 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.811 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.811 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.811 { 00:10:30.811 "cntlid": 27, 00:10:30.811 "qid": 0, 00:10:30.811 "state": "enabled", 00:10:30.811 "thread": "nvmf_tgt_poll_group_000", 00:10:30.811 "listen_address": { 00:10:30.811 "trtype": "TCP", 00:10:30.811 "adrfam": "IPv4", 00:10:30.811 "traddr": "10.0.0.2", 00:10:30.811 "trsvcid": "4420" 00:10:30.811 }, 00:10:30.811 "peer_address": { 00:10:30.811 "trtype": "TCP", 00:10:30.811 "adrfam": "IPv4", 00:10:30.811 "traddr": "10.0.0.1", 00:10:30.811 "trsvcid": "47078" 00:10:30.811 }, 00:10:30.811 "auth": { 00:10:30.811 "state": "completed", 00:10:30.811 "digest": "sha256", 00:10:30.812 "dhgroup": "ffdhe4096" 00:10:30.812 } 00:10:30.812 } 00:10:30.812 ]' 00:10:30.812 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.812 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.812 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.812 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:30.812 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.812 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.812 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.812 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.070 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.006 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.264 00:10:32.264 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.264 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:32.264 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.523 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.523 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.523 12:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.523 12:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.523 12:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.523 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.523 { 00:10:32.523 "cntlid": 29, 00:10:32.523 "qid": 0, 00:10:32.523 "state": "enabled", 00:10:32.523 "thread": "nvmf_tgt_poll_group_000", 00:10:32.523 "listen_address": { 00:10:32.523 "trtype": "TCP", 00:10:32.523 "adrfam": "IPv4", 00:10:32.523 "traddr": "10.0.0.2", 00:10:32.523 "trsvcid": "4420" 00:10:32.523 }, 00:10:32.523 "peer_address": { 00:10:32.523 "trtype": "TCP", 00:10:32.523 "adrfam": "IPv4", 00:10:32.523 "traddr": "10.0.0.1", 00:10:32.523 "trsvcid": "47104" 00:10:32.523 }, 00:10:32.523 "auth": { 00:10:32.523 "state": "completed", 00:10:32.523 "digest": "sha256", 00:10:32.523 "dhgroup": 
"ffdhe4096" 00:10:32.523 } 00:10:32.523 } 00:10:32.523 ]' 00:10:32.523 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.782 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.782 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.782 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:32.782 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.782 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.782 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.782 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.042 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:10:33.609 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.609 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:33.609 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.609 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.609 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.609 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.609 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.609 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:33.868 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:34.127 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:34.385 { 00:10:34.385 "cntlid": 31, 00:10:34.385 "qid": 0, 00:10:34.385 "state": "enabled", 00:10:34.385 "thread": "nvmf_tgt_poll_group_000", 00:10:34.385 "listen_address": { 00:10:34.385 "trtype": "TCP", 00:10:34.385 "adrfam": "IPv4", 00:10:34.385 "traddr": "10.0.0.2", 00:10:34.385 "trsvcid": "4420" 00:10:34.385 }, 00:10:34.385 "peer_address": { 00:10:34.385 "trtype": "TCP", 00:10:34.385 "adrfam": "IPv4", 00:10:34.385 "traddr": "10.0.0.1", 00:10:34.385 "trsvcid": "47146" 00:10:34.385 }, 00:10:34.385 "auth": { 00:10:34.385 "state": "completed", 00:10:34.385 "digest": "sha256", 00:10:34.385 "dhgroup": "ffdhe4096" 00:10:34.385 } 00:10:34.385 } 00:10:34.385 ]' 00:10:34.385 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:34.643 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.643 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:34.643 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:34.643 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:34.643 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.643 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.643 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.902 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid 
d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.838 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.411 00:10:36.411 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.411 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.411 12:52:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.669 { 00:10:36.669 "cntlid": 33, 00:10:36.669 "qid": 0, 00:10:36.669 "state": "enabled", 00:10:36.669 "thread": "nvmf_tgt_poll_group_000", 00:10:36.669 "listen_address": { 00:10:36.669 "trtype": "TCP", 00:10:36.669 "adrfam": "IPv4", 00:10:36.669 "traddr": "10.0.0.2", 00:10:36.669 "trsvcid": "4420" 00:10:36.669 }, 00:10:36.669 "peer_address": { 00:10:36.669 "trtype": "TCP", 00:10:36.669 "adrfam": "IPv4", 00:10:36.669 "traddr": "10.0.0.1", 00:10:36.669 "trsvcid": "40062" 00:10:36.669 }, 00:10:36.669 "auth": { 00:10:36.669 "state": "completed", 00:10:36.669 "digest": "sha256", 00:10:36.669 "dhgroup": "ffdhe6144" 00:10:36.669 } 00:10:36.669 } 00:10:36.669 ]' 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.669 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:36.926 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:36.926 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:36.926 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.926 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.926 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.184 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:10:37.794 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.794 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:37.794 12:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.794 12:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.794 12:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.794 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.794 
12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:37.794 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.052 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.053 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.053 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.053 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.620 00:10:38.620 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.620 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.620 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.877 { 00:10:38.877 "cntlid": 35, 00:10:38.877 "qid": 0, 00:10:38.877 "state": "enabled", 00:10:38.877 "thread": "nvmf_tgt_poll_group_000", 00:10:38.877 "listen_address": { 00:10:38.877 "trtype": "TCP", 00:10:38.877 "adrfam": "IPv4", 00:10:38.877 "traddr": "10.0.0.2", 00:10:38.877 "trsvcid": "4420" 00:10:38.877 }, 00:10:38.877 "peer_address": { 00:10:38.877 "trtype": "TCP", 00:10:38.877 
"adrfam": "IPv4", 00:10:38.877 "traddr": "10.0.0.1", 00:10:38.877 "trsvcid": "40092" 00:10:38.877 }, 00:10:38.877 "auth": { 00:10:38.877 "state": "completed", 00:10:38.877 "digest": "sha256", 00:10:38.877 "dhgroup": "ffdhe6144" 00:10:38.877 } 00:10:38.877 } 00:10:38.877 ]' 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.877 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.134 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:10:40.066 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.066 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:40.066 12:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.066 12:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.066 12:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.066 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.066 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:40.066 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.066 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.632 00:10:40.632 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.632 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.632 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.891 { 00:10:40.891 "cntlid": 37, 00:10:40.891 "qid": 0, 00:10:40.891 "state": "enabled", 00:10:40.891 "thread": "nvmf_tgt_poll_group_000", 00:10:40.891 "listen_address": { 00:10:40.891 "trtype": "TCP", 00:10:40.891 "adrfam": "IPv4", 00:10:40.891 "traddr": "10.0.0.2", 00:10:40.891 "trsvcid": "4420" 00:10:40.891 }, 00:10:40.891 "peer_address": { 00:10:40.891 "trtype": "TCP", 00:10:40.891 "adrfam": "IPv4", 00:10:40.891 "traddr": "10.0.0.1", 00:10:40.891 "trsvcid": "40122" 00:10:40.891 }, 00:10:40.891 "auth": { 00:10:40.891 "state": "completed", 00:10:40.891 "digest": "sha256", 00:10:40.891 "dhgroup": "ffdhe6144" 00:10:40.891 } 00:10:40.891 } 00:10:40.891 ]' 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.891 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.149 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:10:42.086 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.086 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:42.086 12:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.086 12:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.086 12:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.086 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.086 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:42.086 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:42.347 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.348 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.609 00:10:42.609 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.609 
12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.609 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.866 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.866 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.866 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.866 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.866 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.866 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.866 { 00:10:42.866 "cntlid": 39, 00:10:42.866 "qid": 0, 00:10:42.866 "state": "enabled", 00:10:42.866 "thread": "nvmf_tgt_poll_group_000", 00:10:42.866 "listen_address": { 00:10:42.866 "trtype": "TCP", 00:10:42.866 "adrfam": "IPv4", 00:10:42.866 "traddr": "10.0.0.2", 00:10:42.866 "trsvcid": "4420" 00:10:42.866 }, 00:10:42.866 "peer_address": { 00:10:42.866 "trtype": "TCP", 00:10:42.866 "adrfam": "IPv4", 00:10:42.866 "traddr": "10.0.0.1", 00:10:42.866 "trsvcid": "40148" 00:10:42.866 }, 00:10:42.866 "auth": { 00:10:42.866 "state": "completed", 00:10:42.866 "digest": "sha256", 00:10:42.866 "dhgroup": "ffdhe6144" 00:10:42.866 } 00:10:42.866 } 00:10:42.866 ]' 00:10:42.866 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.124 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.124 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.124 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:43.124 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.124 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.124 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.124 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.382 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:10:43.947 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.947 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:43.947 12:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.947 12:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.205 12:53:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.205 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.141 00:10:45.141 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:45.141 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.141 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.141 { 00:10:45.141 "cntlid": 41, 00:10:45.141 "qid": 0, 00:10:45.141 "state": "enabled", 00:10:45.141 "thread": "nvmf_tgt_poll_group_000", 00:10:45.141 "listen_address": { 00:10:45.141 "trtype": 
"TCP", 00:10:45.141 "adrfam": "IPv4", 00:10:45.141 "traddr": "10.0.0.2", 00:10:45.141 "trsvcid": "4420" 00:10:45.141 }, 00:10:45.141 "peer_address": { 00:10:45.141 "trtype": "TCP", 00:10:45.141 "adrfam": "IPv4", 00:10:45.141 "traddr": "10.0.0.1", 00:10:45.141 "trsvcid": "40160" 00:10:45.141 }, 00:10:45.141 "auth": { 00:10:45.141 "state": "completed", 00:10:45.141 "digest": "sha256", 00:10:45.141 "dhgroup": "ffdhe8192" 00:10:45.141 } 00:10:45.141 } 00:10:45.141 ]' 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.141 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.400 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:45.400 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.400 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.400 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.400 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.722 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:10:46.290 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.290 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:46.290 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.290 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.290 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.290 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:46.290 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:46.290 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:46.548 12:53:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.548 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.116 00:10:47.116 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.116 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.116 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.374 { 00:10:47.374 "cntlid": 43, 00:10:47.374 "qid": 0, 00:10:47.374 "state": "enabled", 00:10:47.374 "thread": "nvmf_tgt_poll_group_000", 00:10:47.374 "listen_address": { 00:10:47.374 "trtype": "TCP", 00:10:47.374 "adrfam": "IPv4", 00:10:47.374 "traddr": "10.0.0.2", 00:10:47.374 "trsvcid": "4420" 00:10:47.374 }, 00:10:47.374 "peer_address": { 00:10:47.374 "trtype": "TCP", 00:10:47.374 "adrfam": "IPv4", 00:10:47.374 "traddr": "10.0.0.1", 00:10:47.374 "trsvcid": "60596" 00:10:47.374 }, 00:10:47.374 "auth": { 00:10:47.374 "state": "completed", 00:10:47.374 "digest": "sha256", 00:10:47.374 "dhgroup": "ffdhe8192" 00:10:47.374 } 00:10:47.374 } 00:10:47.374 ]' 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.374 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:47.631 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:47.631 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:47.631 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:10:47.631 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.631 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.890 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:10:48.457 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.457 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:48.457 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.457 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.457 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.457 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.457 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.457 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.719 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.286 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.605 { 00:10:49.605 "cntlid": 45, 00:10:49.605 "qid": 0, 00:10:49.605 "state": "enabled", 00:10:49.605 "thread": "nvmf_tgt_poll_group_000", 00:10:49.605 "listen_address": { 00:10:49.605 "trtype": "TCP", 00:10:49.605 "adrfam": "IPv4", 00:10:49.605 "traddr": "10.0.0.2", 00:10:49.605 "trsvcid": "4420" 00:10:49.605 }, 00:10:49.605 "peer_address": { 00:10:49.605 "trtype": "TCP", 00:10:49.605 "adrfam": "IPv4", 00:10:49.605 "traddr": "10.0.0.1", 00:10:49.605 "trsvcid": "60622" 00:10:49.605 }, 00:10:49.605 "auth": { 00:10:49.605 "state": "completed", 00:10:49.605 "digest": "sha256", 00:10:49.605 "dhgroup": "ffdhe8192" 00:10:49.605 } 00:10:49.605 } 00:10:49.605 ]' 00:10:49.605 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.865 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.865 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.865 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:49.865 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.865 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.865 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.865 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.124 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:10:50.692 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.692 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:50.692 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.692 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.692 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.692 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.692 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.692 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:50.950 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.887 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:10:51.887 { 00:10:51.887 "cntlid": 47, 00:10:51.887 "qid": 0, 00:10:51.887 "state": "enabled", 00:10:51.887 "thread": "nvmf_tgt_poll_group_000", 00:10:51.887 "listen_address": { 00:10:51.887 "trtype": "TCP", 00:10:51.887 "adrfam": "IPv4", 00:10:51.887 "traddr": "10.0.0.2", 00:10:51.887 "trsvcid": "4420" 00:10:51.887 }, 00:10:51.887 "peer_address": { 00:10:51.887 "trtype": "TCP", 00:10:51.887 "adrfam": "IPv4", 00:10:51.887 "traddr": "10.0.0.1", 00:10:51.887 "trsvcid": "60642" 00:10:51.887 }, 00:10:51.887 "auth": { 00:10:51.887 "state": "completed", 00:10:51.887 "digest": "sha256", 00:10:51.887 "dhgroup": "ffdhe8192" 00:10:51.887 } 00:10:51.887 } 00:10:51.887 ]' 00:10:51.887 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.147 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.147 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.147 12:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:52.147 12:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.147 12:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.147 12:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.147 12:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.406 12:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:10:52.974 12:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:52.974 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
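
Every iteration traced here follows the same shape: constrain the host-side DH-HMAC-CHAP options, authorize the host NQN on the subsystem with the key pair under test, attach a controller over the host RPC socket, then inspect the resulting qpair. A minimal sketch of that shape, reusing only commands, paths and names that appear in this log; the target-side rpc_cmd wrapper hides its socket, so the default target socket is assumed, and the key names key0/ckey0 are assumed to have been loaded earlier in the run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=d239ea4f-47fe-42e0-b535-ac0b7a58df88
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: allow exactly one digest/dhgroup combination for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups null

# Target side: authorize the host with the key pair under test
# (key0/ckey0 are assumed to be registered already).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; DH-HMAC-CHAP runs as part of CONNECT.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
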
00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.234 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.494 00:10:53.753 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.753 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.753 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.012 { 00:10:54.012 "cntlid": 49, 00:10:54.012 "qid": 0, 00:10:54.012 "state": "enabled", 00:10:54.012 "thread": "nvmf_tgt_poll_group_000", 00:10:54.012 "listen_address": { 00:10:54.012 "trtype": "TCP", 00:10:54.012 "adrfam": "IPv4", 00:10:54.012 "traddr": "10.0.0.2", 00:10:54.012 "trsvcid": "4420" 00:10:54.012 }, 00:10:54.012 "peer_address": { 00:10:54.012 "trtype": "TCP", 00:10:54.012 "adrfam": "IPv4", 00:10:54.012 "traddr": "10.0.0.1", 00:10:54.012 "trsvcid": "60670" 00:10:54.012 }, 00:10:54.012 "auth": { 00:10:54.012 "state": "completed", 00:10:54.012 "digest": "sha384", 00:10:54.012 "dhgroup": "null" 00:10:54.012 } 00:10:54.012 } 00:10:54.012 ]' 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.012 12:53:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:54.012 12:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.012 12:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.012 12:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.012 12:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.271 12:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.207 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.773 00:10:55.773 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.773 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.773 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.031 { 00:10:56.031 "cntlid": 51, 00:10:56.031 "qid": 0, 00:10:56.031 "state": "enabled", 00:10:56.031 "thread": "nvmf_tgt_poll_group_000", 00:10:56.031 "listen_address": { 00:10:56.031 "trtype": "TCP", 00:10:56.031 "adrfam": "IPv4", 00:10:56.031 "traddr": "10.0.0.2", 00:10:56.031 "trsvcid": "4420" 00:10:56.031 }, 00:10:56.031 "peer_address": { 00:10:56.031 "trtype": "TCP", 00:10:56.031 "adrfam": "IPv4", 00:10:56.031 "traddr": "10.0.0.1", 00:10:56.031 "trsvcid": "60838" 00:10:56.031 }, 00:10:56.031 "auth": { 00:10:56.031 "state": "completed", 00:10:56.031 "digest": "sha384", 00:10:56.031 "dhgroup": "null" 00:10:56.031 } 00:10:56.031 } 00:10:56.031 ]' 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.031 12:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.289 12:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:10:57.222 12:53:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.222 12:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:57.222 12:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.222 12:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.222 12:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.222 12:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.222 12:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:57.222 12:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.222 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.787 00:10:57.787 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.787 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.787 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.044 { 00:10:58.044 "cntlid": 53, 00:10:58.044 "qid": 0, 00:10:58.044 "state": "enabled", 00:10:58.044 "thread": "nvmf_tgt_poll_group_000", 00:10:58.044 "listen_address": { 00:10:58.044 "trtype": "TCP", 00:10:58.044 "adrfam": "IPv4", 00:10:58.044 "traddr": "10.0.0.2", 00:10:58.044 "trsvcid": "4420" 00:10:58.044 }, 00:10:58.044 "peer_address": { 00:10:58.044 "trtype": "TCP", 00:10:58.044 "adrfam": "IPv4", 00:10:58.044 "traddr": "10.0.0.1", 00:10:58.044 "trsvcid": "60864" 00:10:58.044 }, 00:10:58.044 "auth": { 00:10:58.044 "state": "completed", 00:10:58.044 "digest": "sha384", 00:10:58.044 "dhgroup": "null" 00:10:58.044 } 00:10:58.044 } 00:10:58.044 ]' 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:58.044 12:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.044 12:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.044 12:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.044 12:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.302 12:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:59.234 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:59.800 00:10:59.800 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.800 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.800 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.059 { 00:11:00.059 "cntlid": 55, 00:11:00.059 "qid": 0, 00:11:00.059 "state": "enabled", 00:11:00.059 "thread": "nvmf_tgt_poll_group_000", 00:11:00.059 "listen_address": { 00:11:00.059 "trtype": "TCP", 00:11:00.059 "adrfam": "IPv4", 00:11:00.059 "traddr": "10.0.0.2", 00:11:00.059 "trsvcid": "4420" 00:11:00.059 }, 00:11:00.059 "peer_address": { 00:11:00.059 "trtype": "TCP", 00:11:00.059 "adrfam": "IPv4", 00:11:00.059 "traddr": "10.0.0.1", 00:11:00.059 "trsvcid": "60898" 00:11:00.059 }, 00:11:00.059 "auth": { 00:11:00.059 "state": "completed", 00:11:00.059 "digest": "sha384", 00:11:00.059 "dhgroup": "null" 00:11:00.059 } 00:11:00.059 } 00:11:00.059 ]' 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.059 12:53:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:00.059 12:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.059 12:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.059 12:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.059 12:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.317 12:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:11:00.960 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.960 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:01.219 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.219 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.219 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.219 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.219 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.219 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:01.220 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.477 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.478 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.735 00:11:01.735 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.735 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.735 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.992 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.992 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.992 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.992 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.992 12:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.992 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.992 { 00:11:01.992 "cntlid": 57, 00:11:01.992 "qid": 0, 00:11:01.992 "state": "enabled", 00:11:01.992 "thread": "nvmf_tgt_poll_group_000", 00:11:01.992 "listen_address": { 00:11:01.992 "trtype": "TCP", 00:11:01.992 "adrfam": "IPv4", 00:11:01.992 "traddr": "10.0.0.2", 00:11:01.992 "trsvcid": "4420" 00:11:01.992 }, 00:11:01.992 "peer_address": { 00:11:01.992 "trtype": "TCP", 00:11:01.992 "adrfam": "IPv4", 00:11:01.992 "traddr": "10.0.0.1", 00:11:01.992 "trsvcid": "60912" 00:11:01.992 }, 00:11:01.992 "auth": { 00:11:01.992 "state": "completed", 00:11:01.992 "digest": "sha384", 00:11:01.992 "dhgroup": "ffdhe2048" 00:11:01.992 } 00:11:01.992 } 00:11:01.992 ]' 00:11:01.992 12:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.992 12:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.992 12:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.249 12:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:02.249 12:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.249 12:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.249 12:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.249 12:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.507 12:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:11:03.073 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.073 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:03.073 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.073 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.073 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.073 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.073 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:03.073 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.331 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.589 00:11:03.848 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.848 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.848 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
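
After each attach the script dumps the subsystem's qpairs and asserts the negotiated auth parameters. The same check written out with the jq filters used above; the expected values are whatever the current pass configured, and the target RPC socket is again assumed to be the default:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")

# Only one controller is attached per pass, so the first qpair is inspected.
digest=$(jq -r '.[0].auth.digest'   <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state'     <<< "$qpairs")

[[ $digest == sha384 && $dhgroup == ffdhe2048 && $state == completed ]] \
        || { echo "unexpected auth parameters: $qpairs"; exit 1; }
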
00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.105 { 00:11:04.105 "cntlid": 59, 00:11:04.105 "qid": 0, 00:11:04.105 "state": "enabled", 00:11:04.105 "thread": "nvmf_tgt_poll_group_000", 00:11:04.105 "listen_address": { 00:11:04.105 "trtype": "TCP", 00:11:04.105 "adrfam": "IPv4", 00:11:04.105 "traddr": "10.0.0.2", 00:11:04.105 "trsvcid": "4420" 00:11:04.105 }, 00:11:04.105 "peer_address": { 00:11:04.105 "trtype": "TCP", 00:11:04.105 "adrfam": "IPv4", 00:11:04.105 "traddr": "10.0.0.1", 00:11:04.105 "trsvcid": "60950" 00:11:04.105 }, 00:11:04.105 "auth": { 00:11:04.105 "state": "completed", 00:11:04.105 "digest": "sha384", 00:11:04.105 "dhgroup": "ffdhe2048" 00:11:04.105 } 00:11:04.105 } 00:11:04.105 ]' 00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.105 12:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.105 12:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:04.105 12:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.105 12:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.105 12:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.105 12:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.678 12:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:11:05.245 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.245 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:05.245 12:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.245 12:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 12:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.245 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.245 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:05.245 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.503 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.761 00:11:05.761 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.761 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.761 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.018 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.018 12:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.018 12:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.018 12:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.018 12:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.018 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.018 { 00:11:06.018 "cntlid": 61, 00:11:06.018 "qid": 0, 00:11:06.018 "state": "enabled", 00:11:06.018 "thread": "nvmf_tgt_poll_group_000", 00:11:06.018 "listen_address": { 00:11:06.018 "trtype": "TCP", 00:11:06.018 "adrfam": "IPv4", 00:11:06.018 "traddr": "10.0.0.2", 00:11:06.018 "trsvcid": "4420" 00:11:06.018 }, 00:11:06.018 "peer_address": { 00:11:06.018 "trtype": "TCP", 00:11:06.018 "adrfam": "IPv4", 00:11:06.018 "traddr": "10.0.0.1", 00:11:06.018 "trsvcid": "42252" 00:11:06.018 }, 00:11:06.018 "auth": { 00:11:06.018 "state": "completed", 00:11:06.018 "digest": "sha384", 00:11:06.018 "dhgroup": 
"ffdhe2048" 00:11:06.018 } 00:11:06.018 } 00:11:06.018 ]' 00:11:06.018 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.018 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.018 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.275 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:06.275 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.275 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.275 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.275 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.533 12:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:11:07.099 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.370 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:07.370 12:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.370 12:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.370 12:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.370 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.370 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:07.370 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.631 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.889 00:11:07.889 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.889 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.889 12:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.189 { 00:11:08.189 "cntlid": 63, 00:11:08.189 "qid": 0, 00:11:08.189 "state": "enabled", 00:11:08.189 "thread": "nvmf_tgt_poll_group_000", 00:11:08.189 "listen_address": { 00:11:08.189 "trtype": "TCP", 00:11:08.189 "adrfam": "IPv4", 00:11:08.189 "traddr": "10.0.0.2", 00:11:08.189 "trsvcid": "4420" 00:11:08.189 }, 00:11:08.189 "peer_address": { 00:11:08.189 "trtype": "TCP", 00:11:08.189 "adrfam": "IPv4", 00:11:08.189 "traddr": "10.0.0.1", 00:11:08.189 "trsvcid": "42284" 00:11:08.189 }, 00:11:08.189 "auth": { 00:11:08.189 "state": "completed", 00:11:08.189 "digest": "sha384", 00:11:08.189 "dhgroup": "ffdhe2048" 00:11:08.189 } 00:11:08.189 } 00:11:08.189 ]' 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.189 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.448 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:08.448 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.448 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.448 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.448 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.705 12:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid 
d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:09.273 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.532 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.791 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.791 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.791 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.049 00:11:10.049 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.049 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.049 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.307 { 00:11:10.307 "cntlid": 65, 00:11:10.307 "qid": 0, 00:11:10.307 "state": "enabled", 00:11:10.307 "thread": "nvmf_tgt_poll_group_000", 00:11:10.307 "listen_address": { 00:11:10.307 "trtype": "TCP", 00:11:10.307 "adrfam": "IPv4", 00:11:10.307 "traddr": "10.0.0.2", 00:11:10.307 "trsvcid": "4420" 00:11:10.307 }, 00:11:10.307 "peer_address": { 00:11:10.307 "trtype": "TCP", 00:11:10.307 "adrfam": "IPv4", 00:11:10.307 "traddr": "10.0.0.1", 00:11:10.307 "trsvcid": "42322" 00:11:10.307 }, 00:11:10.307 "auth": { 00:11:10.307 "state": "completed", 00:11:10.307 "digest": "sha384", 00:11:10.307 "dhgroup": "ffdhe3072" 00:11:10.307 } 00:11:10.307 } 00:11:10.307 ]' 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.307 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.566 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:10.566 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.566 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.566 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.566 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.825 12:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:11:11.393 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.393 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:11.393 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.393 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.393 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.393 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:11.393 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:11.393 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.651 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.652 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.910 00:11:11.910 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.910 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.910 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.476 { 00:11:12.476 "cntlid": 67, 00:11:12.476 "qid": 0, 00:11:12.476 "state": "enabled", 00:11:12.476 "thread": "nvmf_tgt_poll_group_000", 00:11:12.476 "listen_address": { 00:11:12.476 "trtype": "TCP", 00:11:12.476 "adrfam": "IPv4", 00:11:12.476 "traddr": "10.0.0.2", 00:11:12.476 "trsvcid": "4420" 00:11:12.476 }, 00:11:12.476 "peer_address": { 00:11:12.476 "trtype": 
"TCP", 00:11:12.476 "adrfam": "IPv4", 00:11:12.476 "traddr": "10.0.0.1", 00:11:12.476 "trsvcid": "42354" 00:11:12.476 }, 00:11:12.476 "auth": { 00:11:12.476 "state": "completed", 00:11:12.476 "digest": "sha384", 00:11:12.476 "dhgroup": "ffdhe3072" 00:11:12.476 } 00:11:12.476 } 00:11:12.476 ]' 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.476 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.735 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.672 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.238 00:11:14.238 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.238 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.238 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.496 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.496 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.496 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.496 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.496 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.496 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.496 { 00:11:14.496 "cntlid": 69, 00:11:14.496 "qid": 0, 00:11:14.496 "state": "enabled", 00:11:14.496 "thread": "nvmf_tgt_poll_group_000", 00:11:14.496 "listen_address": { 00:11:14.496 "trtype": "TCP", 00:11:14.496 "adrfam": "IPv4", 00:11:14.496 "traddr": "10.0.0.2", 00:11:14.496 "trsvcid": "4420" 00:11:14.496 }, 00:11:14.496 "peer_address": { 00:11:14.496 "trtype": "TCP", 00:11:14.496 "adrfam": "IPv4", 00:11:14.496 "traddr": "10.0.0.1", 00:11:14.496 "trsvcid": "42372" 00:11:14.496 }, 00:11:14.496 "auth": { 00:11:14.497 "state": "completed", 00:11:14.497 "digest": "sha384", 00:11:14.497 "dhgroup": "ffdhe3072" 00:11:14.497 } 00:11:14.497 } 00:11:14.497 ]' 00:11:14.497 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.497 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.497 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.497 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:14.497 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.497 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.497 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.497 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.755 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:11:15.691 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.691 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:15.691 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.691 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.691 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.691 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.691 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:15.691 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.950 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:16.209 00:11:16.209 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
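For reference, the round above (and each later round in this log) drives the same sequence of RPCs with a different digest, DH group, and key index. Below is a minimal shell sketch of one round, using only the RPCs and flags that appear in this log; the socket path, addresses, NQNs, and key names are the ones this particular run happens to use and are illustrative rather than required.

# Sketch of one authentication round, assembled from the commands seen above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock                       # host-side SPDK app socket used in this run
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88

# Limit the host-side bdev_nvme module to the digest/dhgroup pair under test.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Register the host on the subsystem with the matching DH-HMAC-CHAP key pair
# (target-side RPC; the script issues it through its rpc_cmd helper).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host side; this is where the DH-HMAC-CHAP handshake runs.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Check the controller came up and that the qpair reports the negotiated auth parameters.
[[ "$("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe3072 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# Detach before the next key/dhgroup combination.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0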
00:11:16.209 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.209 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.467 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.467 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.467 12:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.467 12:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.467 12:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.467 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.467 { 00:11:16.467 "cntlid": 71, 00:11:16.467 "qid": 0, 00:11:16.467 "state": "enabled", 00:11:16.467 "thread": "nvmf_tgt_poll_group_000", 00:11:16.467 "listen_address": { 00:11:16.467 "trtype": "TCP", 00:11:16.467 "adrfam": "IPv4", 00:11:16.467 "traddr": "10.0.0.2", 00:11:16.467 "trsvcid": "4420" 00:11:16.467 }, 00:11:16.467 "peer_address": { 00:11:16.467 "trtype": "TCP", 00:11:16.467 "adrfam": "IPv4", 00:11:16.467 "traddr": "10.0.0.1", 00:11:16.467 "trsvcid": "44310" 00:11:16.467 }, 00:11:16.467 "auth": { 00:11:16.467 "state": "completed", 00:11:16.467 "digest": "sha384", 00:11:16.467 "dhgroup": "ffdhe3072" 00:11:16.467 } 00:11:16.467 } 00:11:16.467 ]' 00:11:16.467 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.726 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.726 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.726 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:16.726 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.726 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.726 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.726 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.985 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.942 12:53:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.942 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.511 00:11:18.511 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.511 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.511 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.769 { 00:11:18.769 "cntlid": 73, 00:11:18.769 "qid": 0, 00:11:18.769 "state": "enabled", 00:11:18.769 "thread": "nvmf_tgt_poll_group_000", 00:11:18.769 "listen_address": { 00:11:18.769 "trtype": 
"TCP", 00:11:18.769 "adrfam": "IPv4", 00:11:18.769 "traddr": "10.0.0.2", 00:11:18.769 "trsvcid": "4420" 00:11:18.769 }, 00:11:18.769 "peer_address": { 00:11:18.769 "trtype": "TCP", 00:11:18.769 "adrfam": "IPv4", 00:11:18.769 "traddr": "10.0.0.1", 00:11:18.769 "trsvcid": "44342" 00:11:18.769 }, 00:11:18.769 "auth": { 00:11:18.769 "state": "completed", 00:11:18.769 "digest": "sha384", 00:11:18.769 "dhgroup": "ffdhe4096" 00:11:18.769 } 00:11:18.769 } 00:11:18.769 ]' 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.769 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.337 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:11:19.904 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.905 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:19.905 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.905 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.905 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.905 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.905 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:19.905 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:20.163 12:53:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.163 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.424 00:11:20.424 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.424 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.424 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.683 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.683 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.683 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.683 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.683 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.683 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.683 { 00:11:20.683 "cntlid": 75, 00:11:20.683 "qid": 0, 00:11:20.683 "state": "enabled", 00:11:20.683 "thread": "nvmf_tgt_poll_group_000", 00:11:20.683 "listen_address": { 00:11:20.683 "trtype": "TCP", 00:11:20.683 "adrfam": "IPv4", 00:11:20.683 "traddr": "10.0.0.2", 00:11:20.683 "trsvcid": "4420" 00:11:20.683 }, 00:11:20.683 "peer_address": { 00:11:20.683 "trtype": "TCP", 00:11:20.683 "adrfam": "IPv4", 00:11:20.683 "traddr": "10.0.0.1", 00:11:20.683 "trsvcid": "44370" 00:11:20.683 }, 00:11:20.683 "auth": { 00:11:20.683 "state": "completed", 00:11:20.683 "digest": "sha384", 00:11:20.683 "dhgroup": "ffdhe4096" 00:11:20.683 } 00:11:20.683 } 00:11:20.683 ]' 00:11:20.683 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.943 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.943 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.943 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:20.943 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.943 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:20.943 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.943 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.202 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:11:22.138 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.138 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:22.138 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.138 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.138 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.138 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.138 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:22.139 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.139 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.706 00:11:22.706 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.706 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.706 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.964 { 00:11:22.964 "cntlid": 77, 00:11:22.964 "qid": 0, 00:11:22.964 "state": "enabled", 00:11:22.964 "thread": "nvmf_tgt_poll_group_000", 00:11:22.964 "listen_address": { 00:11:22.964 "trtype": "TCP", 00:11:22.964 "adrfam": "IPv4", 00:11:22.964 "traddr": "10.0.0.2", 00:11:22.964 "trsvcid": "4420" 00:11:22.964 }, 00:11:22.964 "peer_address": { 00:11:22.964 "trtype": "TCP", 00:11:22.964 "adrfam": "IPv4", 00:11:22.964 "traddr": "10.0.0.1", 00:11:22.964 "trsvcid": "44398" 00:11:22.964 }, 00:11:22.964 "auth": { 00:11:22.964 "state": "completed", 00:11:22.964 "digest": "sha384", 00:11:22.964 "dhgroup": "ffdhe4096" 00:11:22.964 } 00:11:22.964 } 00:11:22.964 ]' 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.964 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.965 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:22.965 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.965 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.965 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.965 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.223 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:11:24.159 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.159 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:24.159 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.159 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.159 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.159 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.159 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:24.159 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:24.159 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:24.727 00:11:24.727 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.727 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.727 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:24.985 { 00:11:24.985 "cntlid": 79, 00:11:24.985 "qid": 0, 00:11:24.985 "state": "enabled", 00:11:24.985 "thread": "nvmf_tgt_poll_group_000", 00:11:24.985 "listen_address": { 00:11:24.985 "trtype": "TCP", 00:11:24.985 "adrfam": "IPv4", 00:11:24.985 "traddr": "10.0.0.2", 00:11:24.985 "trsvcid": "4420" 00:11:24.985 }, 00:11:24.985 "peer_address": { 00:11:24.985 "trtype": "TCP", 00:11:24.985 "adrfam": "IPv4", 00:11:24.985 "traddr": "10.0.0.1", 00:11:24.985 "trsvcid": "44442" 00:11:24.985 }, 00:11:24.985 "auth": { 00:11:24.985 "state": "completed", 00:11:24.985 "digest": "sha384", 00:11:24.985 "dhgroup": "ffdhe4096" 00:11:24.985 } 00:11:24.985 } 00:11:24.985 ]' 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.985 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.243 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:26.177 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
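Each round also repeats the check through the kernel NVMe/TCP host: nvme-cli connects with the DHHC-1 secrets passed on the command line, the connection is torn down, and the host entry is removed from the subsystem before the next key/DH-group combination is configured. A minimal sketch of that leg follows, again using only commands and flags that appear in this log; the secret strings are placeholders for the DHHC-1 values printed above.

# Kernel-initiator leg of a round (sketch; the angle-bracket values stand in for the
# DHHC-1 secrets shown elsewhere in this log).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88
hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88

# Connect through the kernel nvme-tcp host, supplying both host and controller secrets.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret '<DHHC-1 host secret>' --dhchap-ctrl-secret '<DHHC-1 ctrl secret>'

# Drop the connection and de-register the host so the next combination starts clean.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"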
00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.177 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.744 00:11:26.744 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.744 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.744 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.003 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.003 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.003 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.003 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.003 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.003 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.003 { 00:11:27.003 "cntlid": 81, 00:11:27.003 "qid": 0, 00:11:27.003 "state": "enabled", 00:11:27.003 "thread": "nvmf_tgt_poll_group_000", 00:11:27.003 "listen_address": { 00:11:27.003 "trtype": "TCP", 00:11:27.003 "adrfam": "IPv4", 00:11:27.003 "traddr": "10.0.0.2", 00:11:27.003 "trsvcid": "4420" 00:11:27.003 }, 00:11:27.003 "peer_address": { 00:11:27.003 "trtype": "TCP", 00:11:27.003 "adrfam": "IPv4", 00:11:27.003 "traddr": "10.0.0.1", 00:11:27.003 "trsvcid": "41346" 00:11:27.003 }, 00:11:27.003 "auth": { 00:11:27.003 "state": "completed", 00:11:27.003 "digest": "sha384", 00:11:27.003 "dhgroup": "ffdhe6144" 00:11:27.003 } 00:11:27.003 } 00:11:27.003 ]' 00:11:27.003 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.003 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.003 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.267 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:11:27.267 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.267 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.267 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.267 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.526 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:11:28.090 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.090 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:28.090 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.090 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.090 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.090 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.090 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:28.090 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.730 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.988 00:11:28.989 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.989 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.989 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.247 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.247 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.247 12:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.247 12:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.248 12:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.248 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.248 { 00:11:29.248 "cntlid": 83, 00:11:29.248 "qid": 0, 00:11:29.248 "state": "enabled", 00:11:29.248 "thread": "nvmf_tgt_poll_group_000", 00:11:29.248 "listen_address": { 00:11:29.248 "trtype": "TCP", 00:11:29.248 "adrfam": "IPv4", 00:11:29.248 "traddr": "10.0.0.2", 00:11:29.248 "trsvcid": "4420" 00:11:29.248 }, 00:11:29.248 "peer_address": { 00:11:29.248 "trtype": "TCP", 00:11:29.248 "adrfam": "IPv4", 00:11:29.248 "traddr": "10.0.0.1", 00:11:29.248 "trsvcid": "41374" 00:11:29.248 }, 00:11:29.248 "auth": { 00:11:29.248 "state": "completed", 00:11:29.248 "digest": "sha384", 00:11:29.248 "dhgroup": "ffdhe6144" 00:11:29.248 } 00:11:29.248 } 00:11:29.248 ]' 00:11:29.248 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.507 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.507 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.507 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:29.507 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.507 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.507 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.507 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.764 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:11:30.699 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:11:30.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.699 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:30.699 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.700 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.266 00:11:31.267 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.267 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.267 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.526 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.526 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.526 12:53:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.526 12:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.526 12:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.526 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.526 { 00:11:31.526 "cntlid": 85, 00:11:31.526 "qid": 0, 00:11:31.526 "state": "enabled", 00:11:31.526 "thread": "nvmf_tgt_poll_group_000", 00:11:31.526 "listen_address": { 00:11:31.526 "trtype": "TCP", 00:11:31.526 "adrfam": "IPv4", 00:11:31.526 "traddr": "10.0.0.2", 00:11:31.526 "trsvcid": "4420" 00:11:31.526 }, 00:11:31.526 "peer_address": { 00:11:31.526 "trtype": "TCP", 00:11:31.526 "adrfam": "IPv4", 00:11:31.526 "traddr": "10.0.0.1", 00:11:31.526 "trsvcid": "41408" 00:11:31.526 }, 00:11:31.526 "auth": { 00:11:31.526 "state": "completed", 00:11:31.526 "digest": "sha384", 00:11:31.526 "dhgroup": "ffdhe6144" 00:11:31.526 } 00:11:31.526 } 00:11:31.526 ]' 00:11:31.526 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.784 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.784 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.784 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:31.784 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.784 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.784 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.785 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.044 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:11:32.980 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:32.981 12:53:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.981 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.547 00:11:33.547 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.547 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.547 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.805 { 00:11:33.805 "cntlid": 87, 00:11:33.805 "qid": 0, 00:11:33.805 "state": "enabled", 00:11:33.805 "thread": "nvmf_tgt_poll_group_000", 00:11:33.805 "listen_address": { 00:11:33.805 "trtype": "TCP", 00:11:33.805 "adrfam": "IPv4", 00:11:33.805 "traddr": "10.0.0.2", 00:11:33.805 "trsvcid": "4420" 00:11:33.805 }, 00:11:33.805 "peer_address": { 00:11:33.805 "trtype": "TCP", 00:11:33.805 "adrfam": "IPv4", 00:11:33.805 "traddr": "10.0.0.1", 00:11:33.805 "trsvcid": "41442" 00:11:33.805 }, 00:11:33.805 "auth": { 00:11:33.805 "state": "completed", 00:11:33.805 "digest": "sha384", 00:11:33.805 "dhgroup": "ffdhe6144" 00:11:33.805 } 00:11:33.805 } 00:11:33.805 ]' 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:11:33.805 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.090 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:34.090 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.090 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.090 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.090 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.348 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:34.912 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.170 12:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.428 12:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.428 12:53:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.428 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.993 00:11:35.993 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.993 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.993 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.251 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.251 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.251 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.251 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.251 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.251 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.251 { 00:11:36.251 "cntlid": 89, 00:11:36.251 "qid": 0, 00:11:36.251 "state": "enabled", 00:11:36.251 "thread": "nvmf_tgt_poll_group_000", 00:11:36.251 "listen_address": { 00:11:36.251 "trtype": "TCP", 00:11:36.251 "adrfam": "IPv4", 00:11:36.251 "traddr": "10.0.0.2", 00:11:36.251 "trsvcid": "4420" 00:11:36.251 }, 00:11:36.251 "peer_address": { 00:11:36.251 "trtype": "TCP", 00:11:36.251 "adrfam": "IPv4", 00:11:36.251 "traddr": "10.0.0.1", 00:11:36.251 "trsvcid": "37756" 00:11:36.251 }, 00:11:36.252 "auth": { 00:11:36.252 "state": "completed", 00:11:36.252 "digest": "sha384", 00:11:36.252 "dhgroup": "ffdhe8192" 00:11:36.252 } 00:11:36.252 } 00:11:36.252 ]' 00:11:36.252 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.252 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.252 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.252 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:36.252 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.252 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.252 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.252 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.817 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret 
DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:11:37.380 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.380 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:37.380 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.380 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.380 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.380 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.380 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:37.380 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.637 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.203 00:11:38.203 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.203 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.203 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
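Each connect_authenticate iteration above verifies what was actually negotiated by listing the target's queue pairs and checking the reported auth fields with jq. A condensed sketch of that check for this iteration (sha384 digest, ffdhe8192 DH group), built only from the RPCs and jq filters visible in this log; rpc_cmd wraps scripts/rpc.py against the target's RPC socket, which this excerpt does not show, so the default socket is assumed:

  # target-side query (default RPC socket assumed; not shown in this excerpt)
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # same jq filters as target/auth.sh@46-48 in the trace above
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
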
00:11:38.460 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.460 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.460 12:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.460 12:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.460 12:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.460 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.460 { 00:11:38.460 "cntlid": 91, 00:11:38.460 "qid": 0, 00:11:38.460 "state": "enabled", 00:11:38.460 "thread": "nvmf_tgt_poll_group_000", 00:11:38.460 "listen_address": { 00:11:38.460 "trtype": "TCP", 00:11:38.460 "adrfam": "IPv4", 00:11:38.460 "traddr": "10.0.0.2", 00:11:38.460 "trsvcid": "4420" 00:11:38.460 }, 00:11:38.460 "peer_address": { 00:11:38.460 "trtype": "TCP", 00:11:38.460 "adrfam": "IPv4", 00:11:38.460 "traddr": "10.0.0.1", 00:11:38.460 "trsvcid": "37780" 00:11:38.460 }, 00:11:38.460 "auth": { 00:11:38.460 "state": "completed", 00:11:38.460 "digest": "sha384", 00:11:38.460 "dhgroup": "ffdhe8192" 00:11:38.460 } 00:11:38.460 } 00:11:38.460 ]' 00:11:38.460 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.718 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.718 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.718 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:38.718 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.718 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.718 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.718 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.976 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:11:39.542 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.542 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:39.542 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.542 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.801 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.801 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.801 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:11:39.801 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.060 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.637 00:11:40.637 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.637 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.637 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.896 { 00:11:40.896 "cntlid": 93, 00:11:40.896 "qid": 0, 00:11:40.896 "state": "enabled", 00:11:40.896 "thread": "nvmf_tgt_poll_group_000", 00:11:40.896 "listen_address": { 00:11:40.896 "trtype": "TCP", 00:11:40.896 "adrfam": "IPv4", 00:11:40.896 "traddr": "10.0.0.2", 00:11:40.896 "trsvcid": "4420" 00:11:40.896 }, 00:11:40.896 "peer_address": { 00:11:40.896 "trtype": "TCP", 00:11:40.896 "adrfam": "IPv4", 00:11:40.896 "traddr": "10.0.0.1", 00:11:40.896 "trsvcid": "37806" 00:11:40.896 }, 00:11:40.896 
"auth": { 00:11:40.896 "state": "completed", 00:11:40.896 "digest": "sha384", 00:11:40.896 "dhgroup": "ffdhe8192" 00:11:40.896 } 00:11:40.896 } 00:11:40.896 ]' 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:40.896 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.155 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.155 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.155 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.155 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:11:42.091 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.091 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:42.091 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.091 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.091 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.092 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.092 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.092 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:42.092 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:42.660 00:11:42.918 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.918 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.918 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.918 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.918 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.918 12:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.919 12:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.177 12:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.177 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.177 { 00:11:43.177 "cntlid": 95, 00:11:43.177 "qid": 0, 00:11:43.177 "state": "enabled", 00:11:43.177 "thread": "nvmf_tgt_poll_group_000", 00:11:43.177 "listen_address": { 00:11:43.177 "trtype": "TCP", 00:11:43.177 "adrfam": "IPv4", 00:11:43.177 "traddr": "10.0.0.2", 00:11:43.177 "trsvcid": "4420" 00:11:43.177 }, 00:11:43.177 "peer_address": { 00:11:43.177 "trtype": "TCP", 00:11:43.177 "adrfam": "IPv4", 00:11:43.177 "traddr": "10.0.0.1", 00:11:43.177 "trsvcid": "37830" 00:11:43.177 }, 00:11:43.177 "auth": { 00:11:43.177 "state": "completed", 00:11:43.177 "digest": "sha384", 00:11:43.177 "dhgroup": "ffdhe8192" 00:11:43.177 } 00:11:43.177 } 00:11:43.177 ]' 00:11:43.177 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.177 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.177 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.177 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.177 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.177 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.177 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.177 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.436 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:44.372 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.631 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.890 00:11:44.890 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
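The block above switches the host to the sha512 digest with the null DH group and re-attaches using key0. A sketch of just the two host-side RPCs, condensed from the commands logged in this run (paths, addresses, and NQNs exactly as they appear above; the target side was granted key0/ckey0 via nvmf_subsystem_add_host immediately before the attach):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
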
00:11:44.890 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.890 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.149 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.149 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.149 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.149 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.149 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.149 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.149 { 00:11:45.149 "cntlid": 97, 00:11:45.149 "qid": 0, 00:11:45.149 "state": "enabled", 00:11:45.149 "thread": "nvmf_tgt_poll_group_000", 00:11:45.149 "listen_address": { 00:11:45.149 "trtype": "TCP", 00:11:45.149 "adrfam": "IPv4", 00:11:45.149 "traddr": "10.0.0.2", 00:11:45.149 "trsvcid": "4420" 00:11:45.149 }, 00:11:45.149 "peer_address": { 00:11:45.149 "trtype": "TCP", 00:11:45.149 "adrfam": "IPv4", 00:11:45.149 "traddr": "10.0.0.1", 00:11:45.149 "trsvcid": "37854" 00:11:45.149 }, 00:11:45.149 "auth": { 00:11:45.149 "state": "completed", 00:11:45.149 "digest": "sha512", 00:11:45.149 "dhgroup": "null" 00:11:45.149 } 00:11:45.149 } 00:11:45.149 ]' 00:11:45.149 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.408 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.408 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.408 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:45.408 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.408 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.408 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.408 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.667 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:11:46.235 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.235 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:46.235 12:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.235 12:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.235 12:54:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.235 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.235 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:46.235 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.802 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.061 00:11:47.061 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.061 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.061 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.331 { 00:11:47.331 "cntlid": 99, 00:11:47.331 "qid": 0, 00:11:47.331 "state": "enabled", 00:11:47.331 "thread": "nvmf_tgt_poll_group_000", 00:11:47.331 "listen_address": { 00:11:47.331 "trtype": "TCP", 00:11:47.331 "adrfam": 
"IPv4", 00:11:47.331 "traddr": "10.0.0.2", 00:11:47.331 "trsvcid": "4420" 00:11:47.331 }, 00:11:47.331 "peer_address": { 00:11:47.331 "trtype": "TCP", 00:11:47.331 "adrfam": "IPv4", 00:11:47.331 "traddr": "10.0.0.1", 00:11:47.331 "trsvcid": "48416" 00:11:47.331 }, 00:11:47.331 "auth": { 00:11:47.331 "state": "completed", 00:11:47.331 "digest": "sha512", 00:11:47.331 "dhgroup": "null" 00:11:47.331 } 00:11:47.331 } 00:11:47.331 ]' 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:47.331 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.598 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.598 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.598 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.880 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:11:48.448 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.448 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:48.448 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.448 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.448 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.448 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.448 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:48.448 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:48.706 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:48.706 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.706 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:48.706 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:48.706 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:48.707 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.707 12:54:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.707 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.707 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.707 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.707 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.707 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.277 00:11:49.277 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.277 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.277 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.536 { 00:11:49.536 "cntlid": 101, 00:11:49.536 "qid": 0, 00:11:49.536 "state": "enabled", 00:11:49.536 "thread": "nvmf_tgt_poll_group_000", 00:11:49.536 "listen_address": { 00:11:49.536 "trtype": "TCP", 00:11:49.536 "adrfam": "IPv4", 00:11:49.536 "traddr": "10.0.0.2", 00:11:49.536 "trsvcid": "4420" 00:11:49.536 }, 00:11:49.536 "peer_address": { 00:11:49.536 "trtype": "TCP", 00:11:49.536 "adrfam": "IPv4", 00:11:49.536 "traddr": "10.0.0.1", 00:11:49.536 "trsvcid": "48432" 00:11:49.536 }, 00:11:49.536 "auth": { 00:11:49.536 "state": "completed", 00:11:49.536 "digest": "sha512", 00:11:49.536 "dhgroup": "null" 00:11:49.536 } 00:11:49.536 } 00:11:49.536 ]' 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
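After detaching the SPDK host controller, each iteration repeats the handshake through the kernel initiator with nvme-cli and then removes the host from the subsystem, as the next lines show. A condensed sketch of that leg, using only flags that appear in this log; the DHHC-1 secrets are elided here, and the real values for this key appear in the surrounding lines:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
      --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
      --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
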
00:11:49.536 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.795 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:11:50.730 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.730 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:50.730 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.730 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.730 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.730 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.730 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:50.730 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:50.988 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:51.247 00:11:51.247 12:54:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.247 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.247 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.506 { 00:11:51.506 "cntlid": 103, 00:11:51.506 "qid": 0, 00:11:51.506 "state": "enabled", 00:11:51.506 "thread": "nvmf_tgt_poll_group_000", 00:11:51.506 "listen_address": { 00:11:51.506 "trtype": "TCP", 00:11:51.506 "adrfam": "IPv4", 00:11:51.506 "traddr": "10.0.0.2", 00:11:51.506 "trsvcid": "4420" 00:11:51.506 }, 00:11:51.506 "peer_address": { 00:11:51.506 "trtype": "TCP", 00:11:51.506 "adrfam": "IPv4", 00:11:51.506 "traddr": "10.0.0.1", 00:11:51.506 "trsvcid": "48458" 00:11:51.506 }, 00:11:51.506 "auth": { 00:11:51.506 "state": "completed", 00:11:51.506 "digest": "sha512", 00:11:51.506 "dhgroup": "null" 00:11:51.506 } 00:11:51.506 } 00:11:51.506 ]' 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:51.506 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.765 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.765 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.765 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.100 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:52.667 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.926 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.184 00:11:53.184 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.184 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.184 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.750 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.750 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.750 12:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.750 12:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.750 12:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.750 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.750 { 00:11:53.750 "cntlid": 105, 00:11:53.750 "qid": 0, 00:11:53.750 "state": "enabled", 00:11:53.750 "thread": "nvmf_tgt_poll_group_000", 00:11:53.750 
"listen_address": { 00:11:53.750 "trtype": "TCP", 00:11:53.750 "adrfam": "IPv4", 00:11:53.750 "traddr": "10.0.0.2", 00:11:53.750 "trsvcid": "4420" 00:11:53.750 }, 00:11:53.750 "peer_address": { 00:11:53.750 "trtype": "TCP", 00:11:53.751 "adrfam": "IPv4", 00:11:53.751 "traddr": "10.0.0.1", 00:11:53.751 "trsvcid": "48474" 00:11:53.751 }, 00:11:53.751 "auth": { 00:11:53.751 "state": "completed", 00:11:53.751 "digest": "sha512", 00:11:53.751 "dhgroup": "ffdhe2048" 00:11:53.751 } 00:11:53.751 } 00:11:53.751 ]' 00:11:53.751 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.751 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.751 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.751 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:53.751 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.751 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.751 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.751 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.008 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:11:54.942 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.942 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:54.942 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.942 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.942 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.942 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.942 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:54.942 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.207 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.465 00:11:55.465 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.465 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.465 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.723 { 00:11:55.723 "cntlid": 107, 00:11:55.723 "qid": 0, 00:11:55.723 "state": "enabled", 00:11:55.723 "thread": "nvmf_tgt_poll_group_000", 00:11:55.723 "listen_address": { 00:11:55.723 "trtype": "TCP", 00:11:55.723 "adrfam": "IPv4", 00:11:55.723 "traddr": "10.0.0.2", 00:11:55.723 "trsvcid": "4420" 00:11:55.723 }, 00:11:55.723 "peer_address": { 00:11:55.723 "trtype": "TCP", 00:11:55.723 "adrfam": "IPv4", 00:11:55.723 "traddr": "10.0.0.1", 00:11:55.723 "trsvcid": "42598" 00:11:55.723 }, 00:11:55.723 "auth": { 00:11:55.723 "state": "completed", 00:11:55.723 "digest": "sha512", 00:11:55.723 "dhgroup": "ffdhe2048" 00:11:55.723 } 00:11:55.723 } 00:11:55.723 ]' 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.723 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.981 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:55.981 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.981 12:54:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.981 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.981 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.239 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:11:56.806 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.806 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:56.806 12:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.806 12:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.806 12:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.806 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.806 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:56.806 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.064 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.631 00:11:57.631 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.631 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.631 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.899 { 00:11:57.899 "cntlid": 109, 00:11:57.899 "qid": 0, 00:11:57.899 "state": "enabled", 00:11:57.899 "thread": "nvmf_tgt_poll_group_000", 00:11:57.899 "listen_address": { 00:11:57.899 "trtype": "TCP", 00:11:57.899 "adrfam": "IPv4", 00:11:57.899 "traddr": "10.0.0.2", 00:11:57.899 "trsvcid": "4420" 00:11:57.899 }, 00:11:57.899 "peer_address": { 00:11:57.899 "trtype": "TCP", 00:11:57.899 "adrfam": "IPv4", 00:11:57.899 "traddr": "10.0.0.1", 00:11:57.899 "trsvcid": "42630" 00:11:57.899 }, 00:11:57.899 "auth": { 00:11:57.899 "state": "completed", 00:11:57.899 "digest": "sha512", 00:11:57.899 "dhgroup": "ffdhe2048" 00:11:57.899 } 00:11:57.899 } 00:11:57.899 ]' 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.899 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.188 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:11:59.122 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.122 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:11:59.122 12:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.122 12:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.122 12:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.122 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.122 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:59.122 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.122 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.688 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:11:59.688 { 00:11:59.688 "cntlid": 111, 00:11:59.688 "qid": 0, 00:11:59.688 "state": "enabled", 00:11:59.688 "thread": "nvmf_tgt_poll_group_000", 00:11:59.688 "listen_address": { 00:11:59.688 "trtype": "TCP", 00:11:59.688 "adrfam": "IPv4", 00:11:59.688 "traddr": "10.0.0.2", 00:11:59.688 "trsvcid": "4420" 00:11:59.688 }, 00:11:59.688 "peer_address": { 00:11:59.688 "trtype": "TCP", 00:11:59.688 "adrfam": "IPv4", 00:11:59.688 "traddr": "10.0.0.1", 00:11:59.688 "trsvcid": "42666" 00:11:59.688 }, 00:11:59.688 "auth": { 00:11:59.688 "state": "completed", 00:11:59.688 "digest": "sha512", 00:11:59.688 "dhgroup": "ffdhe2048" 00:11:59.688 } 00:11:59.688 } 00:11:59.688 ]' 00:11:59.688 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.946 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.946 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.946 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:59.946 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.946 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.946 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.946 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.204 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:01.138 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:01.138 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:01.138 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.138 12:54:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:01.138 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:01.138 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:01.138 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.139 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.139 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.139 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.139 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.139 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.139 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.704 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.704 { 00:12:01.704 "cntlid": 113, 00:12:01.704 "qid": 0, 00:12:01.704 "state": "enabled", 00:12:01.704 "thread": "nvmf_tgt_poll_group_000", 00:12:01.704 "listen_address": { 00:12:01.704 "trtype": "TCP", 00:12:01.704 "adrfam": "IPv4", 00:12:01.704 "traddr": "10.0.0.2", 00:12:01.704 "trsvcid": "4420" 00:12:01.704 }, 00:12:01.704 "peer_address": { 00:12:01.704 "trtype": "TCP", 00:12:01.704 "adrfam": "IPv4", 00:12:01.704 "traddr": "10.0.0.1", 00:12:01.704 "trsvcid": "42706" 00:12:01.704 }, 00:12:01.704 "auth": { 00:12:01.704 "state": "completed", 00:12:01.704 "digest": "sha512", 00:12:01.704 "dhgroup": "ffdhe3072" 00:12:01.704 } 00:12:01.704 } 00:12:01.704 ]' 00:12:01.704 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.962 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.962 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.962 12:54:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:01.962 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.962 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.962 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.962 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.220 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:12:03.156 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.156 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:03.156 12:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.156 12:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.156 12:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.156 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.156 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:03.156 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.156 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.742 00:12:03.742 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.742 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.742 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.007 { 00:12:04.007 "cntlid": 115, 00:12:04.007 "qid": 0, 00:12:04.007 "state": "enabled", 00:12:04.007 "thread": "nvmf_tgt_poll_group_000", 00:12:04.007 "listen_address": { 00:12:04.007 "trtype": "TCP", 00:12:04.007 "adrfam": "IPv4", 00:12:04.007 "traddr": "10.0.0.2", 00:12:04.007 "trsvcid": "4420" 00:12:04.007 }, 00:12:04.007 "peer_address": { 00:12:04.007 "trtype": "TCP", 00:12:04.007 "adrfam": "IPv4", 00:12:04.007 "traddr": "10.0.0.1", 00:12:04.007 "trsvcid": "42724" 00:12:04.007 }, 00:12:04.007 "auth": { 00:12:04.007 "state": "completed", 00:12:04.007 "digest": "sha512", 00:12:04.007 "dhgroup": "ffdhe3072" 00:12:04.007 } 00:12:04.007 } 00:12:04.007 ]' 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.007 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.266 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:12:05.198 12:54:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.198 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:05.198 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.198 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.198 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.198 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.198 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:05.198 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.198 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.763 00:12:05.763 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.763 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.763 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.763 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.763 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:05.763 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.763 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.020 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.020 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.020 { 00:12:06.021 "cntlid": 117, 00:12:06.021 "qid": 0, 00:12:06.021 "state": "enabled", 00:12:06.021 "thread": "nvmf_tgt_poll_group_000", 00:12:06.021 "listen_address": { 00:12:06.021 "trtype": "TCP", 00:12:06.021 "adrfam": "IPv4", 00:12:06.021 "traddr": "10.0.0.2", 00:12:06.021 "trsvcid": "4420" 00:12:06.021 }, 00:12:06.021 "peer_address": { 00:12:06.021 "trtype": "TCP", 00:12:06.021 "adrfam": "IPv4", 00:12:06.021 "traddr": "10.0.0.1", 00:12:06.021 "trsvcid": "45040" 00:12:06.021 }, 00:12:06.021 "auth": { 00:12:06.021 "state": "completed", 00:12:06.021 "digest": "sha512", 00:12:06.021 "dhgroup": "ffdhe3072" 00:12:06.021 } 00:12:06.021 } 00:12:06.021 ]' 00:12:06.021 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.021 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.021 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.021 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.021 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.021 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.021 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.021 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.278 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:12:07.212 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.212 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:07.212 12:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.212 12:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.212 12:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.212 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.212 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:07.212 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:07.212 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:07.776 00:12:07.776 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.776 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.776 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.034 { 00:12:08.034 "cntlid": 119, 00:12:08.034 "qid": 0, 00:12:08.034 "state": "enabled", 00:12:08.034 "thread": "nvmf_tgt_poll_group_000", 00:12:08.034 "listen_address": { 00:12:08.034 "trtype": "TCP", 00:12:08.034 "adrfam": "IPv4", 00:12:08.034 "traddr": "10.0.0.2", 00:12:08.034 "trsvcid": "4420" 00:12:08.034 }, 00:12:08.034 "peer_address": { 00:12:08.034 "trtype": "TCP", 00:12:08.034 "adrfam": "IPv4", 00:12:08.034 "traddr": "10.0.0.1", 00:12:08.034 "trsvcid": "45078" 00:12:08.034 }, 00:12:08.034 "auth": { 00:12:08.034 "state": "completed", 00:12:08.034 "digest": "sha512", 00:12:08.034 "dhgroup": "ffdhe3072" 00:12:08.034 } 00:12:08.034 } 00:12:08.034 ]' 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.034 
12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.034 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.292 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:12:09.227 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.227 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:09.227 12:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.227 12:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.227 12:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.227 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.227 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.227 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:09.228 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.228 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.794 00:12:09.794 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.794 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.794 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.051 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.051 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.051 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.051 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.051 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.051 { 00:12:10.051 "cntlid": 121, 00:12:10.051 "qid": 0, 00:12:10.051 "state": "enabled", 00:12:10.051 "thread": "nvmf_tgt_poll_group_000", 00:12:10.051 "listen_address": { 00:12:10.051 "trtype": "TCP", 00:12:10.051 "adrfam": "IPv4", 00:12:10.051 "traddr": "10.0.0.2", 00:12:10.051 "trsvcid": "4420" 00:12:10.051 }, 00:12:10.051 "peer_address": { 00:12:10.051 "trtype": "TCP", 00:12:10.051 "adrfam": "IPv4", 00:12:10.051 "traddr": "10.0.0.1", 00:12:10.051 "trsvcid": "45098" 00:12:10.051 }, 00:12:10.051 "auth": { 00:12:10.051 "state": "completed", 00:12:10.051 "digest": "sha512", 00:12:10.051 "dhgroup": "ffdhe4096" 00:12:10.051 } 00:12:10.051 } 00:12:10.051 ]' 00:12:10.051 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.051 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.051 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.051 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:10.051 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.310 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.310 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.311 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.311 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret 
DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:12:11.245 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.245 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:11.245 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.245 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.245 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.245 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.245 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:11.245 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.503 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.761 00:12:11.761 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.761 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.761 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:12:12.021 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.021 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.021 12:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.021 12:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.021 12:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.021 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.021 { 00:12:12.021 "cntlid": 123, 00:12:12.021 "qid": 0, 00:12:12.021 "state": "enabled", 00:12:12.021 "thread": "nvmf_tgt_poll_group_000", 00:12:12.021 "listen_address": { 00:12:12.021 "trtype": "TCP", 00:12:12.021 "adrfam": "IPv4", 00:12:12.021 "traddr": "10.0.0.2", 00:12:12.021 "trsvcid": "4420" 00:12:12.021 }, 00:12:12.021 "peer_address": { 00:12:12.021 "trtype": "TCP", 00:12:12.021 "adrfam": "IPv4", 00:12:12.021 "traddr": "10.0.0.1", 00:12:12.021 "trsvcid": "45132" 00:12:12.021 }, 00:12:12.021 "auth": { 00:12:12.021 "state": "completed", 00:12:12.021 "digest": "sha512", 00:12:12.021 "dhgroup": "ffdhe4096" 00:12:12.021 } 00:12:12.021 } 00:12:12.021 ]' 00:12:12.021 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.021 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.021 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.280 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.280 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.280 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.280 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.281 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.539 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:12:13.107 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.107 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:13.107 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.107 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.107 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.107 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.107 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:12:13.107 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.366 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.934 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.934 { 00:12:13.934 "cntlid": 125, 00:12:13.934 "qid": 0, 00:12:13.934 "state": "enabled", 00:12:13.934 "thread": "nvmf_tgt_poll_group_000", 00:12:13.934 "listen_address": { 00:12:13.934 "trtype": "TCP", 00:12:13.934 "adrfam": "IPv4", 00:12:13.934 "traddr": "10.0.0.2", 00:12:13.934 "trsvcid": "4420" 00:12:13.934 }, 00:12:13.934 "peer_address": { 00:12:13.934 "trtype": "TCP", 00:12:13.934 "adrfam": "IPv4", 00:12:13.934 "traddr": "10.0.0.1", 00:12:13.934 "trsvcid": "45166" 00:12:13.934 }, 00:12:13.934 
"auth": { 00:12:13.934 "state": "completed", 00:12:13.934 "digest": "sha512", 00:12:13.934 "dhgroup": "ffdhe4096" 00:12:13.934 } 00:12:13.934 } 00:12:13.934 ]' 00:12:13.934 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.193 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.193 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.193 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.193 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.193 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.193 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.193 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.452 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:12:15.019 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.019 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:15.019 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.019 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.019 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.019 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.019 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.019 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:15.277 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:15.844 00:12:15.844 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.844 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.844 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.103 { 00:12:16.103 "cntlid": 127, 00:12:16.103 "qid": 0, 00:12:16.103 "state": "enabled", 00:12:16.103 "thread": "nvmf_tgt_poll_group_000", 00:12:16.103 "listen_address": { 00:12:16.103 "trtype": "TCP", 00:12:16.103 "adrfam": "IPv4", 00:12:16.103 "traddr": "10.0.0.2", 00:12:16.103 "trsvcid": "4420" 00:12:16.103 }, 00:12:16.103 "peer_address": { 00:12:16.103 "trtype": "TCP", 00:12:16.103 "adrfam": "IPv4", 00:12:16.103 "traddr": "10.0.0.1", 00:12:16.103 "trsvcid": "39012" 00:12:16.103 }, 00:12:16.103 "auth": { 00:12:16.103 "state": "completed", 00:12:16.103 "digest": "sha512", 00:12:16.103 "dhgroup": "ffdhe4096" 00:12:16.103 } 00:12:16.103 } 00:12:16.103 ]' 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.103 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.103 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.104 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.104 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.104 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.104 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.362 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.316 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.883 00:12:17.883 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.883 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
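The trace above repeats one connect_authenticate cycle per digest/dhgroup/key combination. Below is a condensed sketch of a single cycle, built only from RPCs, addresses and NQNs that appear in this log; the key names key0/ckey0 are assumed to have been registered earlier in the script, and the target-side rpc_cmd calls are assumed to use the target's default RPC socket while the host-side calls use -s /var/tmp/host.sock as shown in the trace.

    # host (initiator) side: restrict DH-HMAC-CHAP negotiation to one digest and one FFDHE group
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # target side: allow the host NQN on the subsystem with a DH-HMAC-CHAP key
    # (adding --dhchap-ctrlr-key enables bidirectional authentication)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: attach a controller, authenticating with the same key pair
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0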
00:12:17.883 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.141 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.141 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.141 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.141 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.141 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.141 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.141 { 00:12:18.141 "cntlid": 129, 00:12:18.141 "qid": 0, 00:12:18.141 "state": "enabled", 00:12:18.141 "thread": "nvmf_tgt_poll_group_000", 00:12:18.141 "listen_address": { 00:12:18.141 "trtype": "TCP", 00:12:18.141 "adrfam": "IPv4", 00:12:18.141 "traddr": "10.0.0.2", 00:12:18.141 "trsvcid": "4420" 00:12:18.141 }, 00:12:18.141 "peer_address": { 00:12:18.141 "trtype": "TCP", 00:12:18.141 "adrfam": "IPv4", 00:12:18.141 "traddr": "10.0.0.1", 00:12:18.141 "trsvcid": "39026" 00:12:18.141 }, 00:12:18.141 "auth": { 00:12:18.141 "state": "completed", 00:12:18.141 "digest": "sha512", 00:12:18.142 "dhgroup": "ffdhe6144" 00:12:18.142 } 00:12:18.142 } 00:12:18.142 ]' 00:12:18.142 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.142 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.142 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.142 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:18.142 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.399 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.400 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.400 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.657 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:12:19.252 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.252 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:19.252 12:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.252 12:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.252 12:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.252 
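After each attach, the script checks both ends of the authenticated connection: the host must report a controller named nvme0, and the target's queue pair must carry an auth block whose digest, dhgroup and state match what was configured. A minimal sketch of that verification, using the RPCs and jq filters visible in the trace (writing the qpair list to a temporary file is an illustration only; the script pipes the RPC output straight into jq):

    # host side: the controller should exist
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'         # expect: nvme0

    # target side: inspect the negotiated auth parameters on the qpair
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
    jq -r '.[0].auth.digest'  qpairs.json    # expect: sha512
    jq -r '.[0].auth.dhgroup' qpairs.json    # expect: ffdhe6144 (the group under test)
    jq -r '.[0].auth.state'   qpairs.json    # expect: completed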
12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.252 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:19.252 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:19.510 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:19.510 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.510 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.510 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:19.510 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:19.511 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.511 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.511 12:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.511 12:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.511 12:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.511 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.511 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.769 00:12:20.027 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.027 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.027 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.286 { 00:12:20.286 "cntlid": 131, 00:12:20.286 "qid": 0, 00:12:20.286 "state": "enabled", 00:12:20.286 "thread": "nvmf_tgt_poll_group_000", 00:12:20.286 "listen_address": { 00:12:20.286 "trtype": "TCP", 00:12:20.286 "adrfam": "IPv4", 00:12:20.286 "traddr": "10.0.0.2", 00:12:20.286 "trsvcid": 
"4420" 00:12:20.286 }, 00:12:20.286 "peer_address": { 00:12:20.286 "trtype": "TCP", 00:12:20.286 "adrfam": "IPv4", 00:12:20.286 "traddr": "10.0.0.1", 00:12:20.286 "trsvcid": "39060" 00:12:20.286 }, 00:12:20.286 "auth": { 00:12:20.286 "state": "completed", 00:12:20.286 "digest": "sha512", 00:12:20.286 "dhgroup": "ffdhe6144" 00:12:20.286 } 00:12:20.286 } 00:12:20.286 ]' 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.286 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.854 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:12:21.423 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.423 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:21.423 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.423 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.423 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.423 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.424 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:21.424 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.680 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.244 00:12:22.244 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.244 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.244 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.502 { 00:12:22.502 "cntlid": 133, 00:12:22.502 "qid": 0, 00:12:22.502 "state": "enabled", 00:12:22.502 "thread": "nvmf_tgt_poll_group_000", 00:12:22.502 "listen_address": { 00:12:22.502 "trtype": "TCP", 00:12:22.502 "adrfam": "IPv4", 00:12:22.502 "traddr": "10.0.0.2", 00:12:22.502 "trsvcid": "4420" 00:12:22.502 }, 00:12:22.502 "peer_address": { 00:12:22.502 "trtype": "TCP", 00:12:22.502 "adrfam": "IPv4", 00:12:22.502 "traddr": "10.0.0.1", 00:12:22.502 "trsvcid": "39094" 00:12:22.502 }, 00:12:22.502 "auth": { 00:12:22.502 "state": "completed", 00:12:22.502 "digest": "sha512", 00:12:22.502 "dhgroup": "ffdhe6144" 00:12:22.502 } 00:12:22.502 } 00:12:22.502 ]' 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
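Each round also exercises the kernel host path with nvme-cli, passing the DH-HMAC-CHAP secrets inline in their DHHC-1 wire format rather than by key name. A sketch of that step, mirroring the connect invocation in the trace (the secret values are elided here; in the log they are test keys generated earlier in the script):

    # nvme-cli path: authenticate with inline DHHC-1 secrets, then tear the connection down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
        --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0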
00:12:22.502 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.760 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.694 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.285 00:12:24.285 12:54:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.285 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.285 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.544 { 00:12:24.544 "cntlid": 135, 00:12:24.544 "qid": 0, 00:12:24.544 "state": "enabled", 00:12:24.544 "thread": "nvmf_tgt_poll_group_000", 00:12:24.544 "listen_address": { 00:12:24.544 "trtype": "TCP", 00:12:24.544 "adrfam": "IPv4", 00:12:24.544 "traddr": "10.0.0.2", 00:12:24.544 "trsvcid": "4420" 00:12:24.544 }, 00:12:24.544 "peer_address": { 00:12:24.544 "trtype": "TCP", 00:12:24.544 "adrfam": "IPv4", 00:12:24.544 "traddr": "10.0.0.1", 00:12:24.544 "trsvcid": "39130" 00:12:24.544 }, 00:12:24.544 "auth": { 00:12:24.544 "state": "completed", 00:12:24.544 "digest": "sha512", 00:12:24.544 "dhgroup": "ffdhe6144" 00:12:24.544 } 00:12:24.544 } 00:12:24.544 ]' 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.544 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.803 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.803 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.803 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.803 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.803 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.062 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:12:25.630 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.630 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:25.630 12:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.630 12:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 12:54:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.888 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:25.888 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.888 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:25.888 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.147 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.714 00:12:26.714 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.714 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.714 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.974 { 00:12:26.974 "cntlid": 137, 00:12:26.974 "qid": 0, 00:12:26.974 "state": "enabled", 
00:12:26.974 "thread": "nvmf_tgt_poll_group_000", 00:12:26.974 "listen_address": { 00:12:26.974 "trtype": "TCP", 00:12:26.974 "adrfam": "IPv4", 00:12:26.974 "traddr": "10.0.0.2", 00:12:26.974 "trsvcid": "4420" 00:12:26.974 }, 00:12:26.974 "peer_address": { 00:12:26.974 "trtype": "TCP", 00:12:26.974 "adrfam": "IPv4", 00:12:26.974 "traddr": "10.0.0.1", 00:12:26.974 "trsvcid": "36604" 00:12:26.974 }, 00:12:26.974 "auth": { 00:12:26.974 "state": "completed", 00:12:26.974 "digest": "sha512", 00:12:26.974 "dhgroup": "ffdhe8192" 00:12:26.974 } 00:12:26.974 } 00:12:26.974 ]' 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.974 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.974 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.974 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.974 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.236 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:12:28.171 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.171 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:28.171 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.171 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.171 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.171 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.171 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:28.171 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:28.171 
12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.171 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.172 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.136 00:12:29.136 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.136 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.136 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.392 { 00:12:29.392 "cntlid": 139, 00:12:29.392 "qid": 0, 00:12:29.392 "state": "enabled", 00:12:29.392 "thread": "nvmf_tgt_poll_group_000", 00:12:29.392 "listen_address": { 00:12:29.392 "trtype": "TCP", 00:12:29.392 "adrfam": "IPv4", 00:12:29.392 "traddr": "10.0.0.2", 00:12:29.392 "trsvcid": "4420" 00:12:29.392 }, 00:12:29.392 "peer_address": { 00:12:29.392 "trtype": "TCP", 00:12:29.392 "adrfam": "IPv4", 00:12:29.392 "traddr": "10.0.0.1", 00:12:29.392 "trsvcid": "36630" 00:12:29.392 }, 00:12:29.392 "auth": { 00:12:29.392 "state": "completed", 00:12:29.392 "digest": "sha512", 00:12:29.392 "dhgroup": "ffdhe8192" 00:12:29.392 } 00:12:29.392 } 00:12:29.392 ]' 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.392 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.649 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:01:ZWI3NmI3Njg4Y2UwMzhmNmJjMDFiNjY1YmQ5NzAxZWbUUHZU: --dhchap-ctrl-secret DHHC-1:02:OTM2OTNhZjU2YzAwNjk0OTM1ZjZkNjI1YzkxNWMzMjQ0ZDc4YWI4NTQ5ODhhNzRlzoEXcg==: 00:12:30.578 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.578 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:30.578 12:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.578 12:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.578 12:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.578 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.578 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:30.578 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.887 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.451 00:12:31.451 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.451 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.451 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.708 { 00:12:31.708 "cntlid": 141, 00:12:31.708 "qid": 0, 00:12:31.708 "state": "enabled", 00:12:31.708 "thread": "nvmf_tgt_poll_group_000", 00:12:31.708 "listen_address": { 00:12:31.708 "trtype": "TCP", 00:12:31.708 "adrfam": "IPv4", 00:12:31.708 "traddr": "10.0.0.2", 00:12:31.708 "trsvcid": "4420" 00:12:31.708 }, 00:12:31.708 "peer_address": { 00:12:31.708 "trtype": "TCP", 00:12:31.708 "adrfam": "IPv4", 00:12:31.708 "traddr": "10.0.0.1", 00:12:31.708 "trsvcid": "36656" 00:12:31.708 }, 00:12:31.708 "auth": { 00:12:31.708 "state": "completed", 00:12:31.708 "digest": "sha512", 00:12:31.708 "dhgroup": "ffdhe8192" 00:12:31.708 } 00:12:31.708 } 00:12:31.708 ]' 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.708 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.272 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:02:ZmZkMDUyYTgwMWE5MWU0YjJjNDhmZTMxNmY2ZmViYWQxNGJlOTUxOWIyOTUwZjViHbM5xw==: --dhchap-ctrl-secret DHHC-1:01:MDIwNGYwZDk4NzUzOGYxY2Y5NTZmN2NiMDRkYjBhNDnoqKkE: 00:12:32.836 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.836 12:54:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:32.836 12:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.836 12:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.836 12:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.836 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.836 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.836 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.094 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.660 00:12:33.660 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.660 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.660 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
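This round uses key3 without a matching controller key, so only the host is authenticated. That follows from the array expansion shown at target/auth.sh@37: ckey expands to nothing when ckeys[$3] is unset, and both nvmf_subsystem_add_host and bdev_nvme_attach_controller are then issued with --dhchap-key alone. A small sketch of the pattern; the variable names subnqn and hostnqn are placeholders, not taken from the log:

    # $3 is the key index passed to connect_authenticate; ckey stays empty when no controller key exists
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$3" "${ckey[@]}"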
00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.226 { 00:12:34.226 "cntlid": 143, 00:12:34.226 "qid": 0, 00:12:34.226 "state": "enabled", 00:12:34.226 "thread": "nvmf_tgt_poll_group_000", 00:12:34.226 "listen_address": { 00:12:34.226 "trtype": "TCP", 00:12:34.226 "adrfam": "IPv4", 00:12:34.226 "traddr": "10.0.0.2", 00:12:34.226 "trsvcid": "4420" 00:12:34.226 }, 00:12:34.226 "peer_address": { 00:12:34.226 "trtype": "TCP", 00:12:34.226 "adrfam": "IPv4", 00:12:34.226 "traddr": "10.0.0.1", 00:12:34.226 "trsvcid": "36682" 00:12:34.226 }, 00:12:34.226 "auth": { 00:12:34.226 "state": "completed", 00:12:34.226 "digest": "sha512", 00:12:34.226 "dhgroup": "ffdhe8192" 00:12:34.226 } 00:12:34.226 } 00:12:34.226 ]' 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.226 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.484 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.417 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.675 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.675 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.675 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.240 00:12:36.240 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.240 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.240 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.498 { 00:12:36.498 "cntlid": 145, 00:12:36.498 "qid": 0, 00:12:36.498 "state": "enabled", 00:12:36.498 "thread": "nvmf_tgt_poll_group_000", 00:12:36.498 "listen_address": { 00:12:36.498 "trtype": "TCP", 00:12:36.498 "adrfam": "IPv4", 00:12:36.498 "traddr": "10.0.0.2", 00:12:36.498 "trsvcid": "4420" 00:12:36.498 }, 00:12:36.498 "peer_address": { 00:12:36.498 "trtype": "TCP", 00:12:36.498 "adrfam": "IPv4", 00:12:36.498 "traddr": "10.0.0.1", 00:12:36.498 "trsvcid": "36966" 00:12:36.498 }, 00:12:36.498 "auth": { 00:12:36.498 "state": "completed", 00:12:36.498 "digest": "sha512", 00:12:36.498 "dhgroup": "ffdhe8192" 00:12:36.498 } 00:12:36.498 } 
00:12:36.498 ]' 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.498 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.756 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:00:YTZkN2RjMjJjZTMxODc4ZjZhYzQ4M2QxODY4YTU5MmYwMjA1YmEwM2Q2OWIxNDMxxgwS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjhiMmE4NmYwODgxNDY5ZGZiZDVmYmVmNjI3OTA1NGYyYTk1ZjVmODIzMjFhNjAxOGQzYWQwYzFjMzJjYzllMZee0Jg=: 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.743 12:54:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:37.743 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:38.309 request: 00:12:38.309 { 00:12:38.309 "name": "nvme0", 00:12:38.309 "trtype": "tcp", 00:12:38.309 "traddr": "10.0.0.2", 00:12:38.309 "adrfam": "ipv4", 00:12:38.309 "trsvcid": "4420", 00:12:38.309 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:38.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88", 00:12:38.309 "prchk_reftag": false, 00:12:38.309 "prchk_guard": false, 00:12:38.309 "hdgst": false, 00:12:38.309 "ddgst": false, 00:12:38.309 "dhchap_key": "key2", 00:12:38.309 "method": "bdev_nvme_attach_controller", 00:12:38.309 "req_id": 1 00:12:38.309 } 00:12:38.309 Got JSON-RPC error response 00:12:38.309 response: 00:12:38.309 { 00:12:38.309 "code": -5, 00:12:38.309 "message": "Input/output error" 00:12:38.309 } 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:38.309 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:38.874 request: 00:12:38.874 { 00:12:38.874 "name": "nvme0", 00:12:38.874 "trtype": "tcp", 00:12:38.874 "traddr": "10.0.0.2", 00:12:38.874 "adrfam": "ipv4", 00:12:38.874 "trsvcid": "4420", 00:12:38.874 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:38.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88", 00:12:38.874 "prchk_reftag": false, 00:12:38.874 "prchk_guard": false, 00:12:38.874 "hdgst": false, 00:12:38.874 "ddgst": false, 00:12:38.874 "dhchap_key": "key1", 00:12:38.874 "dhchap_ctrlr_key": "ckey2", 00:12:38.874 "method": "bdev_nvme_attach_controller", 00:12:38.874 "req_id": 1 00:12:38.874 } 00:12:38.874 Got JSON-RPC error response 00:12:38.874 response: 00:12:38.874 { 00:12:38.874 "code": -5, 00:12:38.874 "message": "Input/output error" 00:12:38.874 } 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key1 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.874 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.441 request: 00:12:39.441 { 00:12:39.441 "name": "nvme0", 00:12:39.441 "trtype": "tcp", 00:12:39.441 "traddr": "10.0.0.2", 00:12:39.441 "adrfam": "ipv4", 00:12:39.441 "trsvcid": "4420", 00:12:39.441 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:39.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88", 00:12:39.441 "prchk_reftag": false, 00:12:39.441 "prchk_guard": false, 00:12:39.441 "hdgst": false, 00:12:39.441 "ddgst": false, 00:12:39.441 "dhchap_key": "key1", 00:12:39.441 "dhchap_ctrlr_key": "ckey1", 00:12:39.441 "method": "bdev_nvme_attach_controller", 00:12:39.441 "req_id": 1 00:12:39.441 } 00:12:39.441 Got JSON-RPC error response 00:12:39.441 response: 00:12:39.441 { 00:12:39.441 "code": -5, 00:12:39.441 "message": "Input/output error" 00:12:39.441 } 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69289 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69289 ']' 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69289 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69289 00:12:39.441 killing process with pid 69289 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69289' 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69289 00:12:39.441 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69289 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72345 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72345 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72345 ']' 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.699 12:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
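The string of "Input/output error" (-5) responses above is the intended outcome of this part of the test: the host is deliberately offered a DH-HMAC-CHAP key pair that does not match what the target registered for this host NQN, so the attach must fail. A condensed sketch of that pattern follows, using the same sockets, NQNs and key names as this run; it assumes the target (default RPC socket), the host bdev application on /var/tmp/host.sock, and the named keys were already set up earlier in the test.

# Target side: register the host with key1/ckey1 only (default RPC socket /var/tmp/spdk.sock).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Host side: attach with a controller key that does not match (ckey2); DH-HMAC-CHAP
# negotiation cannot complete and the RPC returns -5 (Input/output error).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
    && echo 'unexpected: attach succeeded' || echo 'attach rejected as expected'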
00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72345 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72345 ']' 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.634 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.893 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.893 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:40.893 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:40.893 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.893 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.151 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.718 00:12:41.718 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.718 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.718 12:54:57 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.977 { 00:12:41.977 "cntlid": 1, 00:12:41.977 "qid": 0, 00:12:41.977 "state": "enabled", 00:12:41.977 "thread": "nvmf_tgt_poll_group_000", 00:12:41.977 "listen_address": { 00:12:41.977 "trtype": "TCP", 00:12:41.977 "adrfam": "IPv4", 00:12:41.977 "traddr": "10.0.0.2", 00:12:41.977 "trsvcid": "4420" 00:12:41.977 }, 00:12:41.977 "peer_address": { 00:12:41.977 "trtype": "TCP", 00:12:41.977 "adrfam": "IPv4", 00:12:41.977 "traddr": "10.0.0.1", 00:12:41.977 "trsvcid": "37012" 00:12:41.977 }, 00:12:41.977 "auth": { 00:12:41.977 "state": "completed", 00:12:41.977 "digest": "sha512", 00:12:41.977 "dhgroup": "ffdhe8192" 00:12:41.977 } 00:12:41.977 } 00:12:41.977 ]' 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:41.977 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.236 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.236 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.236 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.494 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-secret DHHC-1:03:NmYxNjc1MDA0ZTU4YTBjODU0MDgwYTViNDEzMDE2NzA1YjI2N2UzOTI4Y2Q3NzU5MGM4NTM1YzVhMmQ4MmNmYc5jHRY=: 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --dhchap-key key3 00:12:43.060 12:54:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:43.060 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.319 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.576 request: 00:12:43.576 { 00:12:43.576 "name": "nvme0", 00:12:43.576 "trtype": "tcp", 00:12:43.576 "traddr": "10.0.0.2", 00:12:43.576 "adrfam": "ipv4", 00:12:43.576 "trsvcid": "4420", 00:12:43.576 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:43.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88", 00:12:43.576 "prchk_reftag": false, 00:12:43.576 "prchk_guard": false, 00:12:43.576 "hdgst": false, 00:12:43.576 "ddgst": false, 00:12:43.576 "dhchap_key": "key3", 00:12:43.576 "method": "bdev_nvme_attach_controller", 00:12:43.576 "req_id": 1 00:12:43.576 } 00:12:43.576 Got JSON-RPC error response 00:12:43.576 response: 00:12:43.576 { 00:12:43.576 "code": -5, 00:12:43.576 "message": "Input/output error" 00:12:43.576 } 00:12:43.576 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:43.576 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:43.576 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:43.576 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:43.576 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
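The failure just above is induced from the host side rather than by a key mismatch: bdev_nvme_set_options narrows the DH-HMAC-CHAP digests the initiator will offer, so the attach with key3 can no longer complete negotiation and again returns -5. A minimal sketch of that restriction and of the later restore, with the same host socket and option values used in this run:

# Offer only sha256 from the host; the subsequent attach with key3 is expected to fail (-5).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256
# Restore the full digest and DH-group lists before the remaining checks.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192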
00:12:43.576 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:43.576 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:43.576 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.834 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.092 request: 00:12:44.092 { 00:12:44.092 "name": "nvme0", 00:12:44.092 "trtype": "tcp", 00:12:44.092 "traddr": "10.0.0.2", 00:12:44.092 "adrfam": "ipv4", 00:12:44.092 "trsvcid": "4420", 00:12:44.092 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:44.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88", 00:12:44.092 "prchk_reftag": false, 00:12:44.092 "prchk_guard": false, 00:12:44.092 "hdgst": false, 00:12:44.092 "ddgst": false, 00:12:44.092 "dhchap_key": "key3", 00:12:44.092 "method": "bdev_nvme_attach_controller", 00:12:44.092 "req_id": 1 00:12:44.092 } 00:12:44.092 Got JSON-RPC error response 00:12:44.092 response: 00:12:44.092 { 00:12:44.092 "code": -5, 00:12:44.092 "message": "Input/output error" 00:12:44.092 } 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:44.092 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:44.351 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:12:44.610 request: 00:12:44.610 { 00:12:44.610 "name": "nvme0", 00:12:44.610 "trtype": "tcp", 00:12:44.610 "traddr": "10.0.0.2", 00:12:44.610 "adrfam": "ipv4", 00:12:44.610 "trsvcid": "4420", 00:12:44.610 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:44.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88", 00:12:44.610 "prchk_reftag": false, 00:12:44.610 "prchk_guard": false, 00:12:44.610 "hdgst": false, 00:12:44.610 "ddgst": false, 00:12:44.610 "dhchap_key": "key0", 00:12:44.610 "dhchap_ctrlr_key": "key1", 00:12:44.610 "method": "bdev_nvme_attach_controller", 00:12:44.610 "req_id": 1 00:12:44.610 } 00:12:44.610 Got JSON-RPC error response 00:12:44.610 response: 00:12:44.610 { 00:12:44.610 "code": -5, 00:12:44.610 "message": "Input/output error" 00:12:44.610 } 00:12:44.610 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:44.610 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:44.610 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:44.610 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:44.610 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:44.610 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:44.867 00:12:44.867 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:44.867 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.867 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:45.126 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.126 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.126 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.384 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69327 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69327 ']' 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69327 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69327 00:12:45.385 killing process with pid 69327 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:45.385 12:55:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69327' 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69327 00:12:45.385 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69327 00:12:45.644 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:45.644 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.644 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:45.903 rmmod nvme_tcp 00:12:45.903 rmmod nvme_fabrics 00:12:45.903 rmmod nvme_keyring 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72345 ']' 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72345 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72345 ']' 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72345 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72345 00:12:45.903 killing process with pid 72345 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72345' 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72345 00:12:45.903 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72345 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oxf /tmp/spdk.key-sha256.zrP /tmp/spdk.key-sha384.xYM /tmp/spdk.key-sha512.qqX /tmp/spdk.key-sha512.oUr /tmp/spdk.key-sha384.Mhv /tmp/spdk.key-sha256.TPr '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:46.161 00:12:46.161 real 2m50.364s 00:12:46.161 user 6m46.194s 00:12:46.161 sys 0m26.734s 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.161 12:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.161 ************************************ 00:12:46.161 END TEST nvmf_auth_target 00:12:46.161 ************************************ 00:12:46.161 12:55:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:46.161 12:55:02 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:12:46.161 12:55:02 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:46.161 12:55:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:46.161 12:55:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.161 12:55:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:46.161 ************************************ 00:12:46.161 START TEST nvmf_bdevio_no_huge 00:12:46.161 ************************************ 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:46.161 * Looking for test storage... 00:12:46.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.161 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.162 
12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:46.162 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:46.420 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:46.421 12:55:02 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:46.421 Cannot find device "nvmf_tgt_br" 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.421 Cannot find device "nvmf_tgt_br2" 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:46.421 Cannot find device "nvmf_tgt_br" 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:46.421 Cannot find device "nvmf_tgt_br2" 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:46.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:46.421 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:46.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:12:46.681 00:12:46.681 --- 10.0.0.2 ping statistics --- 00:12:46.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.681 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:46.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:46.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:46.681 00:12:46.681 --- 10.0.0.3 ping statistics --- 00:12:46.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.681 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:46.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:46.681 00:12:46.681 --- 10.0.0.1 ping statistics --- 00:12:46.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.681 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72657 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72657 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72657 ']' 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.681 12:55:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.681 [2024-07-15 12:55:02.606486] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:46.681 [2024-07-15 12:55:02.606579] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:46.949 [2024-07-15 12:55:02.756605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.949 [2024-07-15 12:55:02.876382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
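The successful pings recorded above validate the veth/bridge topology that nvmf_veth_init builds for this test. Condensed to its essentials (interface names and addresses exactly as in this run; the second target interface nvmf_tgt_if2 with 10.0.0.3 is omitted for brevity), the setup is:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge the two halves together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # host -> target namespace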
00:12:46.949 [2024-07-15 12:55:02.876878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.949 [2024-07-15 12:55:02.877228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.949 [2024-07-15 12:55:02.877794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.949 [2024-07-15 12:55:02.878014] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.949 [2024-07-15 12:55:02.878413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:46.949 [2024-07-15 12:55:02.878489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:46.949 [2024-07-15 12:55:02.878616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:46.949 [2024-07-15 12:55:02.878624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.949 [2024-07-15 12:55:02.883760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.881 [2024-07-15 12:55:03.649496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.881 Malloc0 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.881 [2024-07-15 12:55:03.693877] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:47.881 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:47.882 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:47.882 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:47.882 { 00:12:47.882 "params": { 00:12:47.882 "name": "Nvme$subsystem", 00:12:47.882 "trtype": "$TEST_TRANSPORT", 00:12:47.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:47.882 "adrfam": "ipv4", 00:12:47.882 "trsvcid": "$NVMF_PORT", 00:12:47.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:47.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:47.882 "hdgst": ${hdgst:-false}, 00:12:47.882 "ddgst": ${ddgst:-false} 00:12:47.882 }, 00:12:47.882 "method": "bdev_nvme_attach_controller" 00:12:47.882 } 00:12:47.882 EOF 00:12:47.882 )") 00:12:47.882 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:47.882 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:12:47.882 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:47.882 12:55:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:47.882 "params": { 00:12:47.882 "name": "Nvme1", 00:12:47.882 "trtype": "tcp", 00:12:47.882 "traddr": "10.0.0.2", 00:12:47.882 "adrfam": "ipv4", 00:12:47.882 "trsvcid": "4420", 00:12:47.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.882 "hdgst": false, 00:12:47.882 "ddgst": false 00:12:47.882 }, 00:12:47.882 "method": "bdev_nvme_attach_controller" 00:12:47.882 }' 00:12:47.882 [2024-07-15 12:55:03.737329] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
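Stripped of the rpc_cmd wrappers, the target-side setup for this bdevio run is five RPCs against /var/tmp/spdk.sock, and the JSON blob printed above is what gen_nvmf_target_json hands to bdevio as its bdev_nvme_attach_controller config. A condensed sketch of the RPC sequence with the same arguments as in the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After this, bdevio (itself started with --no-huge -s 1024, like the target) connects to 10.0.0.2:4420 as nqn.2016-06.io.spdk:host1 and runs the block-level test suite shown below.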
00:12:47.882 [2024-07-15 12:55:03.737424] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72701 ] 00:12:47.882 [2024-07-15 12:55:03.872483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:48.140 [2024-07-15 12:55:03.988474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.140 [2024-07-15 12:55:03.988578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.140 [2024-07-15 12:55:03.988579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.140 [2024-07-15 12:55:04.001800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:48.140 I/O targets: 00:12:48.140 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:48.140 00:12:48.140 00:12:48.140 CUnit - A unit testing framework for C - Version 2.1-3 00:12:48.140 http://cunit.sourceforge.net/ 00:12:48.140 00:12:48.140 00:12:48.140 Suite: bdevio tests on: Nvme1n1 00:12:48.140 Test: blockdev write read block ...passed 00:12:48.140 Test: blockdev write zeroes read block ...passed 00:12:48.140 Test: blockdev write zeroes read no split ...passed 00:12:48.140 Test: blockdev write zeroes read split ...passed 00:12:48.140 Test: blockdev write zeroes read split partial ...passed 00:12:48.140 Test: blockdev reset ...[2024-07-15 12:55:04.194743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:48.140 [2024-07-15 12:55:04.195070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2196870 (9): Bad file descriptor 00:12:48.399 [2024-07-15 12:55:04.212134] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:48.399 passed 00:12:48.399 Test: blockdev write read 8 blocks ...passed 00:12:48.399 Test: blockdev write read size > 128k ...passed 00:12:48.399 Test: blockdev write read invalid size ...passed 00:12:48.399 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.399 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.399 Test: blockdev write read max offset ...passed 00:12:48.399 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.399 Test: blockdev writev readv 8 blocks ...passed 00:12:48.399 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.399 Test: blockdev writev readv block ...passed 00:12:48.399 Test: blockdev writev readv size > 128k ...passed 00:12:48.399 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.400 Test: blockdev comparev and writev ...[2024-07-15 12:55:04.223725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.400 [2024-07-15 12:55:04.223981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.224018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.400 [2024-07-15 12:55:04.224035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.224352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.400 [2024-07-15 12:55:04.224403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.224426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.400 [2024-07-15 12:55:04.224447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.224797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.400 [2024-07-15 12:55:04.224834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.224868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.400 [2024-07-15 12:55:04.224891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.225258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.400 [2024-07-15 12:55:04.225290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.225315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.400 [2024-07-15 12:55:04.225335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:48.400 passed 00:12:48.400 Test: blockdev nvme passthru rw ...passed 00:12:48.400 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:55:04.226401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:48.400 [2024-07-15 12:55:04.226444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.226594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:48.400 [2024-07-15 12:55:04.226635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:48.400 [2024-07-15 12:55:04.226787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:48.400 [2024-07-15 12:55:04.226824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:48.400 passed 00:12:48.400 Test: blockdev nvme admin passthru ...[2024-07-15 12:55:04.226973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:48.400 [2024-07-15 12:55:04.227008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:48.400 passed 00:12:48.400 Test: blockdev copy ...passed 00:12:48.400 00:12:48.400 Run Summary: Type Total Ran Passed Failed Inactive 00:12:48.400 suites 1 1 n/a 0 0 00:12:48.400 tests 23 23 23 0 0 00:12:48.400 asserts 152 152 152 0 n/a 00:12:48.400 00:12:48.400 Elapsed time = 0.181 seconds 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.658 rmmod nvme_tcp 00:12:48.658 rmmod nvme_fabrics 00:12:48.658 rmmod nvme_keyring 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72657 ']' 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72657 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72657 ']' 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72657 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72657 00:12:48.658 killing process with pid 72657 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72657' 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72657 00:12:48.658 12:55:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72657 00:12:49.223 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.223 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:49.223 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:49.223 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.223 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.223 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.223 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.223 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.224 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:49.224 ************************************ 00:12:49.224 END TEST nvmf_bdevio_no_huge 00:12:49.224 ************************************ 00:12:49.224 00:12:49.224 real 0m3.044s 00:12:49.224 user 0m10.004s 00:12:49.224 sys 0m1.181s 00:12:49.224 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.224 12:55:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:49.224 12:55:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:49.224 12:55:05 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:49.224 12:55:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.224 12:55:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.224 12:55:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.224 ************************************ 00:12:49.224 START TEST nvmf_tls 00:12:49.224 ************************************ 00:12:49.224 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:49.224 * Looking for test storage... 
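The teardown traced just above mirrors the setup: stop the target, unload the kernel NVMe/TCP initiator modules, and dismantle the namespaced network. A condensed sketch; the internals of _remove_spdk_ns are not shown in the trace, so the netns deletion below is an assumption:

  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 72657 in the trace
  modprobe -v -r nvme-tcp              # the -v output above shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_tgt_ns_spdk     # assumption: roughly what _remove_spdk_ns amounts to here
  ip -4 addr flush nvmf_init_if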
00:12:49.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.481 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:49.482 Cannot find device "nvmf_tgt_br" 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.482 Cannot find device "nvmf_tgt_br2" 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:49.482 Cannot find device "nvmf_tgt_br" 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:49.482 Cannot find device "nvmf_tgt_br2" 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:49.482 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:49.739 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:49.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:49.739 00:12:49.739 --- 10.0.0.2 ping statistics --- 00:12:49.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.740 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:49.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:49.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:12:49.740 00:12:49.740 --- 10.0.0.3 ping statistics --- 00:12:49.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.740 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:49.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:49.740 00:12:49.740 --- 10.0.0.1 ping statistics --- 00:12:49.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.740 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72881 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72881 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72881 ']' 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.740 12:55:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:49.740 [2024-07-15 12:55:05.711639] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:49.740 [2024-07-15 12:55:05.711722] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.997 [2024-07-15 12:55:05.847970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.997 [2024-07-15 12:55:05.955069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.997 [2024-07-15 12:55:05.955125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:49.997 [2024-07-15 12:55:05.955138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.997 [2024-07-15 12:55:05.955147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.997 [2024-07-15 12:55:05.955155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.997 [2024-07-15 12:55:05.955193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:50.928 true 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:50.928 12:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:51.185 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:51.185 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:51.185 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:51.441 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:51.441 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:51.699 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:51.699 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:51.699 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:51.972 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:51.972 12:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:52.234 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:52.234 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:52.234 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:52.234 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:52.491 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:52.491 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:52.491 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:52.749 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:52.749 12:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
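The tls.sh block traced here exercises SPDK's 'ssl' socket implementation over RPC while the target is still in --wait-for-rpc mode: select ssl as the default impl, round-trip the TLS version (13, then 7), and toggle kTLS, reading each value back with sock_impl_get_options. A condensed sketch of the same knobs:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC sock_set_default_impl -i ssl                       # route new sockets through the ssl impl
  $RPC sock_impl_set_options -i ssl --tls-version 13
  $RPC sock_impl_get_options -i ssl | jq -r .tls_version  # expect: 13
  $RPC sock_impl_set_options -i ssl --enable-ktls         # later flipped back with --disable-ktls
  $RPC sock_impl_get_options -i ssl | jq -r .enable_ktls  # expect: true
  $RPC framework_start_init                               # leave --wait-for-rpc and finish subsystem init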
00:12:53.007 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:53.007 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:53.007 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:53.264 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:53.264 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:53.522 12:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:53.780 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:53.780 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:53.780 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.q0GHGL75iV 00:12:53.780 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:53.780 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.pr8aFiVbXu 00:12:53.780 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:53.781 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:53.781 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.q0GHGL75iV 00:12:53.781 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pr8aFiVbXu 00:12:53.781 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:54.038 12:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:54.296 [2024-07-15 12:55:10.147875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:12:54.296 12:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.q0GHGL75iV 00:12:54.296 12:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q0GHGL75iV 00:12:54.296 12:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:54.555 [2024-07-15 12:55:10.404481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.555 12:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:54.813 12:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:54.813 [2024-07-15 12:55:10.856565] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:54.813 [2024-07-15 12:55:10.856808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.071 12:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:55.071 malloc0 00:12:55.071 12:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:55.329 12:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q0GHGL75iV 00:12:55.587 [2024-07-15 12:55:11.547855] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:55.587 12:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.q0GHGL75iV 00:13:07.788 Initializing NVMe Controllers 00:13:07.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:07.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:07.788 Initialization complete. Launching workers. 
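Relative to the plain TCP target built earlier, the TLS data-path setup traced in this block adds only the -k flag on the listener and an nvmf_subsystem_add_host call carrying the PSK file; the initiator side is spdk_nvme_perf driven through the ssl sock impl. A sketch of the sequence with the same arguments and the key path generated above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  KEY=/tmp/tmp.q0GHGL75iV    # retained PSK in interchange format, chmod 0600

  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (experimental, per the notice above)
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$KEY"

The perf result table that follows is the output of that run.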
00:13:07.788 ======================================================== 00:13:07.788 Latency(us) 00:13:07.788 Device Information : IOPS MiB/s Average min max 00:13:07.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9594.59 37.48 6672.06 1176.83 7446.88 00:13:07.788 ======================================================== 00:13:07.788 Total : 9594.59 37.48 6672.06 1176.83 7446.88 00:13:07.788 00:13:07.788 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q0GHGL75iV 00:13:07.788 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:07.788 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:07.788 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:07.788 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.q0GHGL75iV' 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73107 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73107 /var/tmp/bdevperf.sock 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73107 ']' 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:07.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.789 12:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:07.789 [2024-07-15 12:55:21.813113] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:07.789 [2024-07-15 12:55:21.813391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73107 ] 00:13:07.789 [2024-07-15 12:55:21.955145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.789 [2024-07-15 12:55:22.068455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.789 [2024-07-15 12:55:22.124457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:07.789 12:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.789 12:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:07.789 12:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q0GHGL75iV 00:13:07.789 [2024-07-15 12:55:22.974902] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:07.789 [2024-07-15 12:55:22.975037] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:07.789 TLSTESTn1 00:13:07.789 12:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:07.789 Running I/O for 10 seconds... 00:13:17.754 00:13:17.754 Latency(us) 00:13:17.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.754 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:17.754 Verification LBA range: start 0x0 length 0x2000 00:13:17.754 TLSTESTn1 : 10.02 4047.51 15.81 0.00 0.00 31562.90 6523.81 28955.00 00:13:17.754 =================================================================================================================== 00:13:17.754 Total : 4047.51 15.81 0.00 0.00 31562.90 6523.81 28955.00 00:13:17.754 0 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73107 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73107 ']' 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73107 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73107 00:13:17.754 killing process with pid 73107 00:13:17.754 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.754 00:13:17.754 Latency(us) 00:13:17.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.754 =================================================================================================================== 00:13:17.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73107' 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73107 00:13:17.754 [2024-07-15 12:55:33.218173] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73107 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pr8aFiVbXu 00:13:17.754 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pr8aFiVbXu 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:17.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pr8aFiVbXu 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pr8aFiVbXu' 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73245 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73245 /var/tmp/bdevperf.sock 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73245 ']' 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.755 12:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.755 [2024-07-15 12:55:33.490047] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
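run_bdevperf, used once above with the matching key and here with the second key, wraps a three-step flow: start bdevperf idle (-z) on its own RPC socket, attach a TLS-protected NVMe-oF controller with the PSK under test, then drive I/O through bdevperf.py. A sketch of those steps with the arguments from the trace; substituting /tmp/tmp.pr8aFiVbXu for the key reproduces the negative case that follows:

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock

  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
  # (the harness waits for $SOCK to come up before issuing RPCs)

  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.q0GHGL75iV

  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$SOCK" perform_tests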
00:13:17.755 [2024-07-15 12:55:33.490663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73245 ] 00:13:17.755 [2024-07-15 12:55:33.630086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.755 [2024-07-15 12:55:33.732701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.755 [2024-07-15 12:55:33.785001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pr8aFiVbXu 00:13:18.706 [2024-07-15 12:55:34.711520] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:18.706 [2024-07-15 12:55:34.711833] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:18.706 [2024-07-15 12:55:34.720511] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:18.706 [2024-07-15 12:55:34.721459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7131f0 (107): Transport endpoint is not connected 00:13:18.706 [2024-07-15 12:55:34.722452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7131f0 (9): Bad file descriptor 00:13:18.706 [2024-07-15 12:55:34.723447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:18.706 [2024-07-15 12:55:34.723601] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:18.706 [2024-07-15 12:55:34.723712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:18.706 request: 00:13:18.706 { 00:13:18.706 "name": "TLSTEST", 00:13:18.706 "trtype": "tcp", 00:13:18.706 "traddr": "10.0.0.2", 00:13:18.706 "adrfam": "ipv4", 00:13:18.706 "trsvcid": "4420", 00:13:18.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.706 "prchk_reftag": false, 00:13:18.706 "prchk_guard": false, 00:13:18.706 "hdgst": false, 00:13:18.706 "ddgst": false, 00:13:18.706 "psk": "/tmp/tmp.pr8aFiVbXu", 00:13:18.706 "method": "bdev_nvme_attach_controller", 00:13:18.706 "req_id": 1 00:13:18.706 } 00:13:18.706 Got JSON-RPC error response 00:13:18.706 response: 00:13:18.706 { 00:13:18.706 "code": -5, 00:13:18.706 "message": "Input/output error" 00:13:18.706 } 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73245 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73245 ']' 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73245 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.706 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73245 00:13:18.965 killing process with pid 73245 00:13:18.965 Received shutdown signal, test time was about 10.000000 seconds 00:13:18.965 00:13:18.965 Latency(us) 00:13:18.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.965 =================================================================================================================== 00:13:18.965 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73245' 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73245 00:13:18.965 [2024-07-15 12:55:34.767797] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73245 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q0GHGL75iV 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q0GHGL75iV 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q0GHGL75iV 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.q0GHGL75iV' 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73268 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73268 /var/tmp/bdevperf.sock 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73268 ']' 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:18.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.965 12:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.223 [2024-07-15 12:55:35.035918] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:19.223 [2024-07-15 12:55:35.036222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73268 ] 00:13:19.223 [2024-07-15 12:55:35.174028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.223 [2024-07-15 12:55:35.279618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.482 [2024-07-15 12:55:35.331229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:20.070 12:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.070 12:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:20.070 12:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.q0GHGL75iV 00:13:20.328 [2024-07-15 12:55:36.183180] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:20.328 [2024-07-15 12:55:36.183332] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:20.328 [2024-07-15 12:55:36.190550] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:20.328 [2024-07-15 12:55:36.190731] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:20.328 [2024-07-15 12:55:36.190793] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:20.328 [2024-07-15 12:55:36.191088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13941f0 (107): Transport endpoint is not connected 00:13:20.328 [2024-07-15 12:55:36.192078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13941f0 (9): Bad file descriptor 00:13:20.328 [2024-07-15 12:55:36.193075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:20.328 [2024-07-15 12:55:36.193100] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:20.328 [2024-07-15 12:55:36.193115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:20.328 request: 00:13:20.328 { 00:13:20.328 "name": "TLSTEST", 00:13:20.328 "trtype": "tcp", 00:13:20.328 "traddr": "10.0.0.2", 00:13:20.328 "adrfam": "ipv4", 00:13:20.328 "trsvcid": "4420", 00:13:20.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.328 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:20.328 "prchk_reftag": false, 00:13:20.328 "prchk_guard": false, 00:13:20.328 "hdgst": false, 00:13:20.328 "ddgst": false, 00:13:20.328 "psk": "/tmp/tmp.q0GHGL75iV", 00:13:20.328 "method": "bdev_nvme_attach_controller", 00:13:20.328 "req_id": 1 00:13:20.328 } 00:13:20.328 Got JSON-RPC error response 00:13:20.328 response: 00:13:20.328 { 00:13:20.328 "code": -5, 00:13:20.328 "message": "Input/output error" 00:13:20.328 } 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73268 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73268 ']' 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73268 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73268 00:13:20.328 killing process with pid 73268 00:13:20.328 Received shutdown signal, test time was about 10.000000 seconds 00:13:20.328 00:13:20.328 Latency(us) 00:13:20.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.328 =================================================================================================================== 00:13:20.328 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73268' 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73268 00:13:20.328 [2024-07-15 12:55:36.246035] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:20.328 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73268 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q0GHGL75iV 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q0GHGL75iV 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:20.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q0GHGL75iV 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.q0GHGL75iV' 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73296 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73296 /var/tmp/bdevperf.sock 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73296 ']' 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.587 12:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.587 [2024-07-15 12:55:36.530593] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:20.587 [2024-07-15 12:55:36.530694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73296 ] 00:13:20.844 [2024-07-15 12:55:36.670500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.844 [2024-07-15 12:55:36.775460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.844 [2024-07-15 12:55:36.827668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:21.780 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.780 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:21.780 12:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q0GHGL75iV 00:13:22.039 [2024-07-15 12:55:37.853684] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:22.039 [2024-07-15 12:55:37.854088] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:22.039 [2024-07-15 12:55:37.860784] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:22.039 [2024-07-15 12:55:37.860989] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:22.039 [2024-07-15 12:55:37.861054] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:22.039 [2024-07-15 12:55:37.861764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103b1f0 (107): Transport endpoint is not connected 00:13:22.039 [2024-07-15 12:55:37.862755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103b1f0 (9): Bad file descriptor 00:13:22.039 [2024-07-15 12:55:37.863751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:22.039 [2024-07-15 12:55:37.863774] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:22.039 [2024-07-15 12:55:37.863789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:22.039 request: 00:13:22.039 { 00:13:22.039 "name": "TLSTEST", 00:13:22.039 "trtype": "tcp", 00:13:22.039 "traddr": "10.0.0.2", 00:13:22.039 "adrfam": "ipv4", 00:13:22.039 "trsvcid": "4420", 00:13:22.039 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:22.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.039 "prchk_reftag": false, 00:13:22.039 "prchk_guard": false, 00:13:22.039 "hdgst": false, 00:13:22.039 "ddgst": false, 00:13:22.039 "psk": "/tmp/tmp.q0GHGL75iV", 00:13:22.039 "method": "bdev_nvme_attach_controller", 00:13:22.039 "req_id": 1 00:13:22.040 } 00:13:22.040 Got JSON-RPC error response 00:13:22.040 response: 00:13:22.040 { 00:13:22.040 "code": -5, 00:13:22.040 "message": "Input/output error" 00:13:22.040 } 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73296 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73296 ']' 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73296 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73296 00:13:22.040 killing process with pid 73296 00:13:22.040 Received shutdown signal, test time was about 10.000000 seconds 00:13:22.040 00:13:22.040 Latency(us) 00:13:22.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.040 =================================================================================================================== 00:13:22.040 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73296' 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73296 00:13:22.040 [2024-07-15 12:55:37.916950] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:22.040 12:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73296 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:13:22.298 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73323 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73323 /var/tmp/bdevperf.sock 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73323 ']' 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.299 12:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.299 [2024-07-15 12:55:38.172678] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:22.299 [2024-07-15 12:55:38.172928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73323 ] 00:13:22.299 [2024-07-15 12:55:38.306595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.608 [2024-07-15 12:55:38.418249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.608 [2024-07-15 12:55:38.470816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:23.192 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.192 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:23.192 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:23.466 [2024-07-15 12:55:39.390968] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:23.466 [2024-07-15 12:55:39.392694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a82c00 (9): Bad file descriptor 00:13:23.466 [2024-07-15 12:55:39.393689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:23.466 [2024-07-15 12:55:39.393714] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:23.466 [2024-07-15 12:55:39.393730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:23.466 request: 00:13:23.466 { 00:13:23.466 "name": "TLSTEST", 00:13:23.466 "trtype": "tcp", 00:13:23.466 "traddr": "10.0.0.2", 00:13:23.466 "adrfam": "ipv4", 00:13:23.466 "trsvcid": "4420", 00:13:23.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.466 "prchk_reftag": false, 00:13:23.466 "prchk_guard": false, 00:13:23.466 "hdgst": false, 00:13:23.466 "ddgst": false, 00:13:23.466 "method": "bdev_nvme_attach_controller", 00:13:23.466 "req_id": 1 00:13:23.466 } 00:13:23.466 Got JSON-RPC error response 00:13:23.466 response: 00:13:23.466 { 00:13:23.466 "code": -5, 00:13:23.466 "message": "Input/output error" 00:13:23.466 } 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73323 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73323 ']' 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73323 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73323 00:13:23.466 killing process with pid 73323 00:13:23.466 Received shutdown signal, test time was about 10.000000 seconds 00:13:23.466 00:13:23.466 Latency(us) 00:13:23.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.466 =================================================================================================================== 00:13:23.466 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73323' 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73323 00:13:23.466 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73323 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72881 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72881 ']' 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72881 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72881 00:13:23.724 killing process with pid 72881 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72881' 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72881 00:13:23.724 [2024-07-15 12:55:39.672834] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:23.724 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72881 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:23.982 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.m8Wdz1ev8E 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.m8Wdz1ev8E 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73361 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73361 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73361 ']' 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.983 12:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.983 [2024-07-15 12:55:40.010444] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
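The NVMeTLSkey-1:02:...: string generated above follows the NVMe TLS PSK interchange format: a fixed prefix, a two-hex-digit hash identifier (01 for SHA-256, 02 for SHA-384), and a base64 field, separated by colons. Decoding the logged base64 shows it is the configured key bytes (here the literal ASCII string passed to format_interchange_psk) followed by four extra bytes, i.e. the key with a checksum appended. A minimal sketch of that construction, assuming a little-endian zlib-style CRC32 over the key bytes, which is what SPDK's helper appears to compute:

import base64
import zlib

def interchange_psk(key: str, hash_id: int = 2, prefix: str = "NVMeTLSkey-1") -> str:
    """Build an NVMe TLS PSK interchange string from the configured key bytes."""
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")  # 4-byte checksum appended to the key
    return "{}:{:02x}:{}:".format(prefix, hash_id, base64.b64encode(data + crc).decode())

# interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
# should produce a NVMeTLSkey-1:02:...: string of the same shape as key_long above

The resulting string is written to the mktemp file (/tmp/tmp.m8Wdz1ev8E) and locked down to mode 0600 before the target and bdevperf are pointed at it.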
00:13:23.983 [2024-07-15 12:55:40.010739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.241 [2024-07-15 12:55:40.146697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.241 [2024-07-15 12:55:40.268579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.241 [2024-07-15 12:55:40.268636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.241 [2024-07-15 12:55:40.268650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.241 [2024-07-15 12:55:40.268661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.241 [2024-07-15 12:55:40.268670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.241 [2024-07-15 12:55:40.268705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.499 [2024-07-15 12:55:40.321369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.m8Wdz1ev8E 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m8Wdz1ev8E 00:13:25.064 12:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:25.342 [2024-07-15 12:55:41.186135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.342 12:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:25.600 12:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:25.858 [2024-07-15 12:55:41.762233] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:25.858 [2024-07-15 12:55:41.762518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.858 12:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:26.115 malloc0 00:13:26.115 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:26.372 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m8Wdz1ev8E 00:13:26.629 
[2024-07-15 12:55:42.629738] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:26.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m8Wdz1ev8E 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m8Wdz1ev8E' 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73420 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73420 /var/tmp/bdevperf.sock 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73420 ']' 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.629 12:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.887 [2024-07-15 12:55:42.719751] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:26.887 [2024-07-15 12:55:42.720161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73420 ] 00:13:26.887 [2024-07-15 12:55:42.862139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.144 [2024-07-15 12:55:42.967121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.144 [2024-07-15 12:55:43.021585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:27.709 12:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.709 12:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:27.709 12:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m8Wdz1ev8E 00:13:27.967 [2024-07-15 12:55:43.872889] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:27.967 [2024-07-15 12:55:43.873320] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:27.967 TLSTESTn1 00:13:27.967 12:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:28.224 Running I/O for 10 seconds... 00:13:38.285 00:13:38.285 Latency(us) 00:13:38.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.285 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:38.285 Verification LBA range: start 0x0 length 0x2000 00:13:38.285 TLSTESTn1 : 10.03 3981.30 15.55 0.00 0.00 32086.22 7238.75 28716.68 00:13:38.285 =================================================================================================================== 00:13:38.285 Total : 3981.30 15.55 0.00 0.00 32086.22 7238.75 28716.68 00:13:38.285 0 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73420 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73420 ']' 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73420 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73420 00:13:38.285 killing process with pid 73420 00:13:38.285 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.285 00:13:38.285 Latency(us) 00:13:38.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.285 =================================================================================================================== 00:13:38.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73420' 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73420 00:13:38.285 [2024-07-15 12:55:54.151839] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:38.285 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73420 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.m8Wdz1ev8E 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m8Wdz1ev8E 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m8Wdz1ev8E 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m8Wdz1ev8E 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m8Wdz1ev8E' 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73550 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73550 /var/tmp/bdevperf.sock 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73550 ']' 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:38.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.543 12:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.543 [2024-07-15 12:55:54.423393] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:38.543 [2024-07-15 12:55:54.423675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73550 ] 00:13:38.543 [2024-07-15 12:55:54.558882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.801 [2024-07-15 12:55:54.669002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.801 [2024-07-15 12:55:54.722478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.365 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.365 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:39.365 12:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m8Wdz1ev8E 00:13:39.623 [2024-07-15 12:55:55.672244] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:39.623 [2024-07-15 12:55:55.672537] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:39.623 [2024-07-15 12:55:55.672667] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.m8Wdz1ev8E 00:13:39.623 request: 00:13:39.623 { 00:13:39.623 "name": "TLSTEST", 00:13:39.623 "trtype": "tcp", 00:13:39.623 "traddr": "10.0.0.2", 00:13:39.623 "adrfam": "ipv4", 00:13:39.623 "trsvcid": "4420", 00:13:39.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.623 "prchk_reftag": false, 00:13:39.623 "prchk_guard": false, 00:13:39.623 "hdgst": false, 00:13:39.623 "ddgst": false, 00:13:39.623 "psk": "/tmp/tmp.m8Wdz1ev8E", 00:13:39.623 "method": "bdev_nvme_attach_controller", 00:13:39.623 "req_id": 1 00:13:39.623 } 00:13:39.623 Got JSON-RPC error response 00:13:39.623 response: 00:13:39.623 { 00:13:39.623 "code": -1, 00:13:39.623 "message": "Operation not permitted" 00:13:39.623 } 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73550 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73550 ']' 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73550 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73550 00:13:39.881 killing process with pid 73550 00:13:39.881 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.881 00:13:39.881 Latency(us) 00:13:39.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.881 =================================================================================================================== 00:13:39.881 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73550' 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73550 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73550 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73361 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73361 ']' 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73361 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.881 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73361 00:13:40.140 killing process with pid 73361 00:13:40.140 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:40.140 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:40.140 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73361' 00:13:40.140 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73361 00:13:40.140 [2024-07-15 12:55:55.956476] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:40.140 12:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73361 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73583 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73583 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73583 ']' 00:13:40.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.140 12:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.398 [2024-07-15 12:55:56.231846] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
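The chmod 0666 applied to /tmp/tmp.m8Wdz1ev8E a few steps earlier is what these failures are exercising: the initiator side above rejects the key with "Incorrect permissions for PSK file" / "Operation not permitted", and the target-side nvmf_subsystem_add_host below fails the same way until the mode is restored to 0600 later in the run. The run only demonstrates that 0666 is refused and 0600 accepted; the sketch below assumes the gate is "no group/other access bits at all" and uses a hypothetical helper, not SPDK's actual code:

import os
import stat

def psk_mode_acceptable(path: str) -> bool:
    """Accept key files only the owner can touch (e.g. 0600); reject 0666 and similar."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# psk_mode_acceptable("/tmp/tmp.m8Wdz1ev8E") -> False while the file is 0666,
# True again after the chmod 0600 later in the run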
00:13:40.398 [2024-07-15 12:55:56.231924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.398 [2024-07-15 12:55:56.366557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.665 [2024-07-15 12:55:56.472926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.665 [2024-07-15 12:55:56.473182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.665 [2024-07-15 12:55:56.473202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.665 [2024-07-15 12:55:56.473211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.665 [2024-07-15 12:55:56.473220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.665 [2024-07-15 12:55:56.473254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.665 [2024-07-15 12:55:56.525815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.m8Wdz1ev8E 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.m8Wdz1ev8E 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.m8Wdz1ev8E 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m8Wdz1ev8E 00:13:41.231 12:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:41.489 [2024-07-15 12:55:57.509407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.489 12:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:41.779 12:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:42.047 [2024-07-15 12:55:58.049532] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:13:42.047 [2024-07-15 12:55:58.049755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.047 12:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:42.307 malloc0 00:13:42.307 12:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:42.568 12:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m8Wdz1ev8E 00:13:42.843 [2024-07-15 12:55:58.832876] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:42.843 [2024-07-15 12:55:58.832933] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:42.843 [2024-07-15 12:55:58.832967] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:42.843 request: 00:13:42.843 { 00:13:42.843 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.843 "host": "nqn.2016-06.io.spdk:host1", 00:13:42.843 "psk": "/tmp/tmp.m8Wdz1ev8E", 00:13:42.843 "method": "nvmf_subsystem_add_host", 00:13:42.843 "req_id": 1 00:13:42.843 } 00:13:42.843 Got JSON-RPC error response 00:13:42.843 response: 00:13:42.843 { 00:13:42.843 "code": -32603, 00:13:42.843 "message": "Internal error" 00:13:42.843 } 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73583 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73583 ']' 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73583 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73583 00:13:42.843 killing process with pid 73583 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73583' 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73583 00:13:42.843 12:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73583 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.m8Wdz1ev8E 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73652 00:13:43.134 
12:55:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73652 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73652 ']' 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.134 12:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.134 [2024-07-15 12:55:59.175029] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:43.134 [2024-07-15 12:55:59.175347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.393 [2024-07-15 12:55:59.313931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.393 [2024-07-15 12:55:59.414689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.393 [2024-07-15 12:55:59.414760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.393 [2024-07-15 12:55:59.414787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.393 [2024-07-15 12:55:59.414796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.393 [2024-07-15 12:55:59.414803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.393 [2024-07-15 12:55:59.414826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.651 [2024-07-15 12:55:59.467850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.m8Wdz1ev8E 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m8Wdz1ev8E 00:13:44.219 12:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:44.486 [2024-07-15 12:56:00.447008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.486 12:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:44.745 12:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:45.002 [2024-07-15 12:56:00.951093] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:45.002 [2024-07-15 12:56:00.951312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.002 12:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:45.259 malloc0 00:13:45.259 12:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:45.516 12:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m8Wdz1ev8E 00:13:45.773 [2024-07-15 12:56:01.742592] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73706 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73706 /var/tmp/bdevperf.sock 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73706 ']' 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.773 12:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.773 [2024-07-15 12:56:01.809442] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:45.773 [2024-07-15 12:56:01.809523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73706 ] 00:13:46.034 [2024-07-15 12:56:01.947699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.034 [2024-07-15 12:56:02.065670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.300 [2024-07-15 12:56:02.120441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:46.865 12:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.865 12:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:46.865 12:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m8Wdz1ev8E 00:13:47.123 [2024-07-15 12:56:03.077597] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:47.123 [2024-07-15 12:56:03.077967] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:47.123 TLSTESTn1 00:13:47.123 12:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:47.686 12:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:47.686 "subsystems": [ 00:13:47.686 { 00:13:47.686 "subsystem": "keyring", 00:13:47.686 "config": [] 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "subsystem": "iobuf", 00:13:47.686 "config": [ 00:13:47.686 { 00:13:47.686 "method": "iobuf_set_options", 00:13:47.686 "params": { 00:13:47.686 "small_pool_count": 8192, 00:13:47.686 "large_pool_count": 1024, 00:13:47.686 "small_bufsize": 8192, 00:13:47.686 "large_bufsize": 135168 00:13:47.686 } 00:13:47.686 } 00:13:47.686 ] 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "subsystem": "sock", 00:13:47.686 "config": [ 00:13:47.686 { 00:13:47.686 "method": "sock_set_default_impl", 00:13:47.686 "params": { 00:13:47.686 "impl_name": "uring" 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "sock_impl_set_options", 00:13:47.686 "params": { 00:13:47.686 "impl_name": "ssl", 00:13:47.686 "recv_buf_size": 4096, 00:13:47.686 "send_buf_size": 4096, 00:13:47.686 "enable_recv_pipe": true, 00:13:47.686 "enable_quickack": false, 00:13:47.686 "enable_placement_id": 0, 00:13:47.686 "enable_zerocopy_send_server": true, 00:13:47.686 "enable_zerocopy_send_client": false, 00:13:47.686 "zerocopy_threshold": 0, 00:13:47.686 "tls_version": 0, 00:13:47.686 "enable_ktls": false 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "sock_impl_set_options", 00:13:47.686 "params": { 00:13:47.686 "impl_name": "posix", 00:13:47.686 "recv_buf_size": 2097152, 
00:13:47.686 "send_buf_size": 2097152, 00:13:47.686 "enable_recv_pipe": true, 00:13:47.686 "enable_quickack": false, 00:13:47.686 "enable_placement_id": 0, 00:13:47.686 "enable_zerocopy_send_server": true, 00:13:47.686 "enable_zerocopy_send_client": false, 00:13:47.686 "zerocopy_threshold": 0, 00:13:47.686 "tls_version": 0, 00:13:47.686 "enable_ktls": false 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "sock_impl_set_options", 00:13:47.686 "params": { 00:13:47.686 "impl_name": "uring", 00:13:47.686 "recv_buf_size": 2097152, 00:13:47.686 "send_buf_size": 2097152, 00:13:47.686 "enable_recv_pipe": true, 00:13:47.686 "enable_quickack": false, 00:13:47.686 "enable_placement_id": 0, 00:13:47.686 "enable_zerocopy_send_server": false, 00:13:47.686 "enable_zerocopy_send_client": false, 00:13:47.686 "zerocopy_threshold": 0, 00:13:47.686 "tls_version": 0, 00:13:47.686 "enable_ktls": false 00:13:47.686 } 00:13:47.686 } 00:13:47.686 ] 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "subsystem": "vmd", 00:13:47.686 "config": [] 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "subsystem": "accel", 00:13:47.686 "config": [ 00:13:47.686 { 00:13:47.686 "method": "accel_set_options", 00:13:47.686 "params": { 00:13:47.686 "small_cache_size": 128, 00:13:47.686 "large_cache_size": 16, 00:13:47.686 "task_count": 2048, 00:13:47.686 "sequence_count": 2048, 00:13:47.686 "buf_count": 2048 00:13:47.686 } 00:13:47.686 } 00:13:47.686 ] 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "subsystem": "bdev", 00:13:47.686 "config": [ 00:13:47.686 { 00:13:47.686 "method": "bdev_set_options", 00:13:47.686 "params": { 00:13:47.686 "bdev_io_pool_size": 65535, 00:13:47.686 "bdev_io_cache_size": 256, 00:13:47.686 "bdev_auto_examine": true, 00:13:47.686 "iobuf_small_cache_size": 128, 00:13:47.686 "iobuf_large_cache_size": 16 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "bdev_raid_set_options", 00:13:47.686 "params": { 00:13:47.686 "process_window_size_kb": 1024 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "bdev_iscsi_set_options", 00:13:47.686 "params": { 00:13:47.686 "timeout_sec": 30 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "bdev_nvme_set_options", 00:13:47.686 "params": { 00:13:47.686 "action_on_timeout": "none", 00:13:47.686 "timeout_us": 0, 00:13:47.686 "timeout_admin_us": 0, 00:13:47.686 "keep_alive_timeout_ms": 10000, 00:13:47.686 "arbitration_burst": 0, 00:13:47.686 "low_priority_weight": 0, 00:13:47.686 "medium_priority_weight": 0, 00:13:47.686 "high_priority_weight": 0, 00:13:47.686 "nvme_adminq_poll_period_us": 10000, 00:13:47.686 "nvme_ioq_poll_period_us": 0, 00:13:47.686 "io_queue_requests": 0, 00:13:47.686 "delay_cmd_submit": true, 00:13:47.686 "transport_retry_count": 4, 00:13:47.686 "bdev_retry_count": 3, 00:13:47.686 "transport_ack_timeout": 0, 00:13:47.686 "ctrlr_loss_timeout_sec": 0, 00:13:47.686 "reconnect_delay_sec": 0, 00:13:47.686 "fast_io_fail_timeout_sec": 0, 00:13:47.686 "disable_auto_failback": false, 00:13:47.686 "generate_uuids": false, 00:13:47.686 "transport_tos": 0, 00:13:47.686 "nvme_error_stat": false, 00:13:47.686 "rdma_srq_size": 0, 00:13:47.686 "io_path_stat": false, 00:13:47.686 "allow_accel_sequence": false, 00:13:47.686 "rdma_max_cq_size": 0, 00:13:47.686 "rdma_cm_event_timeout_ms": 0, 00:13:47.686 "dhchap_digests": [ 00:13:47.686 "sha256", 00:13:47.686 "sha384", 00:13:47.686 "sha512" 00:13:47.686 ], 00:13:47.686 "dhchap_dhgroups": [ 00:13:47.686 "null", 00:13:47.686 "ffdhe2048", 00:13:47.686 "ffdhe3072", 
00:13:47.686 "ffdhe4096", 00:13:47.686 "ffdhe6144", 00:13:47.686 "ffdhe8192" 00:13:47.686 ] 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "bdev_nvme_set_hotplug", 00:13:47.686 "params": { 00:13:47.686 "period_us": 100000, 00:13:47.686 "enable": false 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "bdev_malloc_create", 00:13:47.686 "params": { 00:13:47.686 "name": "malloc0", 00:13:47.686 "num_blocks": 8192, 00:13:47.686 "block_size": 4096, 00:13:47.686 "physical_block_size": 4096, 00:13:47.686 "uuid": "b9f37f33-db60-4219-ba1e-8cf54ee52fc9", 00:13:47.686 "optimal_io_boundary": 0 00:13:47.686 } 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "method": "bdev_wait_for_examine" 00:13:47.686 } 00:13:47.686 ] 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "subsystem": "nbd", 00:13:47.686 "config": [] 00:13:47.686 }, 00:13:47.686 { 00:13:47.686 "subsystem": "scheduler", 00:13:47.686 "config": [ 00:13:47.686 { 00:13:47.687 "method": "framework_set_scheduler", 00:13:47.687 "params": { 00:13:47.687 "name": "static" 00:13:47.687 } 00:13:47.687 } 00:13:47.687 ] 00:13:47.687 }, 00:13:47.687 { 00:13:47.687 "subsystem": "nvmf", 00:13:47.687 "config": [ 00:13:47.687 { 00:13:47.687 "method": "nvmf_set_config", 00:13:47.687 "params": { 00:13:47.687 "discovery_filter": "match_any", 00:13:47.687 "admin_cmd_passthru": { 00:13:47.687 "identify_ctrlr": false 00:13:47.687 } 00:13:47.687 } 00:13:47.687 }, 00:13:47.687 { 00:13:47.687 "method": "nvmf_set_max_subsystems", 00:13:47.687 "params": { 00:13:47.687 "max_subsystems": 1024 00:13:47.687 } 00:13:47.687 }, 00:13:47.687 { 00:13:47.687 "method": "nvmf_set_crdt", 00:13:47.687 "params": { 00:13:47.687 "crdt1": 0, 00:13:47.687 "crdt2": 0, 00:13:47.687 "crdt3": 0 00:13:47.687 } 00:13:47.687 }, 00:13:47.687 { 00:13:47.687 "method": "nvmf_create_transport", 00:13:47.687 "params": { 00:13:47.687 "trtype": "TCP", 00:13:47.687 "max_queue_depth": 128, 00:13:47.687 "max_io_qpairs_per_ctrlr": 127, 00:13:47.687 "in_capsule_data_size": 4096, 00:13:47.687 "max_io_size": 131072, 00:13:47.687 "io_unit_size": 131072, 00:13:47.687 "max_aq_depth": 128, 00:13:47.687 "num_shared_buffers": 511, 00:13:47.687 "buf_cache_size": 4294967295, 00:13:47.687 "dif_insert_or_strip": false, 00:13:47.687 "zcopy": false, 00:13:47.687 "c2h_success": false, 00:13:47.687 "sock_priority": 0, 00:13:47.687 "abort_timeout_sec": 1, 00:13:47.687 "ack_timeout": 0, 00:13:47.687 "data_wr_pool_size": 0 00:13:47.687 } 00:13:47.687 }, 00:13:47.687 { 00:13:47.687 "method": "nvmf_create_subsystem", 00:13:47.687 "params": { 00:13:47.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.687 "allow_any_host": false, 00:13:47.687 "serial_number": "SPDK00000000000001", 00:13:47.687 "model_number": "SPDK bdev Controller", 00:13:47.687 "max_namespaces": 10, 00:13:47.687 "min_cntlid": 1, 00:13:47.687 "max_cntlid": 65519, 00:13:47.687 "ana_reporting": false 00:13:47.687 } 00:13:47.687 }, 00:13:47.687 { 00:13:47.687 "method": "nvmf_subsystem_add_host", 00:13:47.687 "params": { 00:13:47.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.687 "host": "nqn.2016-06.io.spdk:host1", 00:13:47.687 "psk": "/tmp/tmp.m8Wdz1ev8E" 00:13:47.687 } 00:13:47.687 }, 00:13:47.687 { 00:13:47.687 "method": "nvmf_subsystem_add_ns", 00:13:47.687 "params": { 00:13:47.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.687 "namespace": { 00:13:47.687 "nsid": 1, 00:13:47.687 "bdev_name": "malloc0", 00:13:47.687 "nguid": "B9F37F33DB604219BA1E8CF54EE52FC9", 00:13:47.687 "uuid": "b9f37f33-db60-4219-ba1e-8cf54ee52fc9", 
00:13:47.687 "no_auto_visible": false 00:13:47.687 } 00:13:47.687 } 00:13:47.687 }, 00:13:47.687 { 00:13:47.687 "method": "nvmf_subsystem_add_listener", 00:13:47.687 "params": { 00:13:47.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.687 "listen_address": { 00:13:47.687 "trtype": "TCP", 00:13:47.687 "adrfam": "IPv4", 00:13:47.687 "traddr": "10.0.0.2", 00:13:47.687 "trsvcid": "4420" 00:13:47.687 }, 00:13:47.687 "secure_channel": true 00:13:47.687 } 00:13:47.687 } 00:13:47.687 ] 00:13:47.687 } 00:13:47.687 ] 00:13:47.687 }' 00:13:47.687 12:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:47.945 12:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:47.945 "subsystems": [ 00:13:47.945 { 00:13:47.945 "subsystem": "keyring", 00:13:47.945 "config": [] 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "subsystem": "iobuf", 00:13:47.945 "config": [ 00:13:47.945 { 00:13:47.945 "method": "iobuf_set_options", 00:13:47.945 "params": { 00:13:47.945 "small_pool_count": 8192, 00:13:47.945 "large_pool_count": 1024, 00:13:47.945 "small_bufsize": 8192, 00:13:47.945 "large_bufsize": 135168 00:13:47.945 } 00:13:47.945 } 00:13:47.945 ] 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "subsystem": "sock", 00:13:47.945 "config": [ 00:13:47.945 { 00:13:47.945 "method": "sock_set_default_impl", 00:13:47.945 "params": { 00:13:47.945 "impl_name": "uring" 00:13:47.945 } 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "method": "sock_impl_set_options", 00:13:47.945 "params": { 00:13:47.945 "impl_name": "ssl", 00:13:47.945 "recv_buf_size": 4096, 00:13:47.945 "send_buf_size": 4096, 00:13:47.945 "enable_recv_pipe": true, 00:13:47.945 "enable_quickack": false, 00:13:47.945 "enable_placement_id": 0, 00:13:47.945 "enable_zerocopy_send_server": true, 00:13:47.945 "enable_zerocopy_send_client": false, 00:13:47.945 "zerocopy_threshold": 0, 00:13:47.945 "tls_version": 0, 00:13:47.945 "enable_ktls": false 00:13:47.945 } 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "method": "sock_impl_set_options", 00:13:47.945 "params": { 00:13:47.945 "impl_name": "posix", 00:13:47.945 "recv_buf_size": 2097152, 00:13:47.945 "send_buf_size": 2097152, 00:13:47.945 "enable_recv_pipe": true, 00:13:47.945 "enable_quickack": false, 00:13:47.945 "enable_placement_id": 0, 00:13:47.945 "enable_zerocopy_send_server": true, 00:13:47.945 "enable_zerocopy_send_client": false, 00:13:47.945 "zerocopy_threshold": 0, 00:13:47.945 "tls_version": 0, 00:13:47.945 "enable_ktls": false 00:13:47.945 } 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "method": "sock_impl_set_options", 00:13:47.945 "params": { 00:13:47.945 "impl_name": "uring", 00:13:47.945 "recv_buf_size": 2097152, 00:13:47.945 "send_buf_size": 2097152, 00:13:47.945 "enable_recv_pipe": true, 00:13:47.945 "enable_quickack": false, 00:13:47.945 "enable_placement_id": 0, 00:13:47.945 "enable_zerocopy_send_server": false, 00:13:47.945 "enable_zerocopy_send_client": false, 00:13:47.945 "zerocopy_threshold": 0, 00:13:47.945 "tls_version": 0, 00:13:47.945 "enable_ktls": false 00:13:47.945 } 00:13:47.945 } 00:13:47.945 ] 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "subsystem": "vmd", 00:13:47.945 "config": [] 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "subsystem": "accel", 00:13:47.945 "config": [ 00:13:47.945 { 00:13:47.945 "method": "accel_set_options", 00:13:47.945 "params": { 00:13:47.945 "small_cache_size": 128, 00:13:47.945 "large_cache_size": 16, 00:13:47.945 "task_count": 2048, 00:13:47.945 "sequence_count": 
2048, 00:13:47.945 "buf_count": 2048 00:13:47.945 } 00:13:47.945 } 00:13:47.945 ] 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "subsystem": "bdev", 00:13:47.945 "config": [ 00:13:47.945 { 00:13:47.945 "method": "bdev_set_options", 00:13:47.945 "params": { 00:13:47.945 "bdev_io_pool_size": 65535, 00:13:47.945 "bdev_io_cache_size": 256, 00:13:47.945 "bdev_auto_examine": true, 00:13:47.945 "iobuf_small_cache_size": 128, 00:13:47.945 "iobuf_large_cache_size": 16 00:13:47.945 } 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "method": "bdev_raid_set_options", 00:13:47.945 "params": { 00:13:47.945 "process_window_size_kb": 1024 00:13:47.945 } 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "method": "bdev_iscsi_set_options", 00:13:47.945 "params": { 00:13:47.945 "timeout_sec": 30 00:13:47.945 } 00:13:47.945 }, 00:13:47.945 { 00:13:47.945 "method": "bdev_nvme_set_options", 00:13:47.945 "params": { 00:13:47.945 "action_on_timeout": "none", 00:13:47.945 "timeout_us": 0, 00:13:47.945 "timeout_admin_us": 0, 00:13:47.945 "keep_alive_timeout_ms": 10000, 00:13:47.945 "arbitration_burst": 0, 00:13:47.945 "low_priority_weight": 0, 00:13:47.945 "medium_priority_weight": 0, 00:13:47.945 "high_priority_weight": 0, 00:13:47.945 "nvme_adminq_poll_period_us": 10000, 00:13:47.945 "nvme_ioq_poll_period_us": 0, 00:13:47.945 "io_queue_requests": 512, 00:13:47.946 "delay_cmd_submit": true, 00:13:47.946 "transport_retry_count": 4, 00:13:47.946 "bdev_retry_count": 3, 00:13:47.946 "transport_ack_timeout": 0, 00:13:47.946 "ctrlr_loss_timeout_sec": 0, 00:13:47.946 "reconnect_delay_sec": 0, 00:13:47.946 "fast_io_fail_timeout_sec": 0, 00:13:47.946 "disable_auto_failback": false, 00:13:47.946 "generate_uuids": false, 00:13:47.946 "transport_tos": 0, 00:13:47.946 "nvme_error_stat": false, 00:13:47.946 "rdma_srq_size": 0, 00:13:47.946 "io_path_stat": false, 00:13:47.946 "allow_accel_sequence": false, 00:13:47.946 "rdma_max_cq_size": 0, 00:13:47.946 "rdma_cm_event_timeout_ms": 0, 00:13:47.946 "dhchap_digests": [ 00:13:47.946 "sha256", 00:13:47.946 "sha384", 00:13:47.946 "sha512" 00:13:47.946 ], 00:13:47.946 "dhchap_dhgroups": [ 00:13:47.946 "null", 00:13:47.946 "ffdhe2048", 00:13:47.946 "ffdhe3072", 00:13:47.946 "ffdhe4096", 00:13:47.946 "ffdhe6144", 00:13:47.946 "ffdhe8192" 00:13:47.946 ] 00:13:47.946 } 00:13:47.946 }, 00:13:47.946 { 00:13:47.946 "method": "bdev_nvme_attach_controller", 00:13:47.946 "params": { 00:13:47.946 "name": "TLSTEST", 00:13:47.946 "trtype": "TCP", 00:13:47.946 "adrfam": "IPv4", 00:13:47.946 "traddr": "10.0.0.2", 00:13:47.946 "trsvcid": "4420", 00:13:47.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.946 "prchk_reftag": false, 00:13:47.946 "prchk_guard": false, 00:13:47.946 "ctrlr_loss_timeout_sec": 0, 00:13:47.946 "reconnect_delay_sec": 0, 00:13:47.946 "fast_io_fail_timeout_sec": 0, 00:13:47.946 "psk": "/tmp/tmp.m8Wdz1ev8E", 00:13:47.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.946 "hdgst": false, 00:13:47.946 "ddgst": false 00:13:47.946 } 00:13:47.946 }, 00:13:47.946 { 00:13:47.946 "method": "bdev_nvme_set_hotplug", 00:13:47.946 "params": { 00:13:47.946 "period_us": 100000, 00:13:47.946 "enable": false 00:13:47.946 } 00:13:47.946 }, 00:13:47.946 { 00:13:47.946 "method": "bdev_wait_for_examine" 00:13:47.946 } 00:13:47.946 ] 00:13:47.946 }, 00:13:47.946 { 00:13:47.946 "subsystem": "nbd", 00:13:47.946 "config": [] 00:13:47.946 } 00:13:47.946 ] 00:13:47.946 }' 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73706 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 73706 ']' 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73706 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73706 00:13:47.946 killing process with pid 73706 00:13:47.946 Received shutdown signal, test time was about 10.000000 seconds 00:13:47.946 00:13:47.946 Latency(us) 00:13:47.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.946 =================================================================================================================== 00:13:47.946 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73706' 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73706 00:13:47.946 [2024-07-15 12:56:03.879564] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:47.946 12:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73706 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73652 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73652 ']' 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73652 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73652 00:13:48.204 killing process with pid 73652 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73652' 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73652 00:13:48.204 [2024-07-15 12:56:04.117544] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:48.204 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73652 00:13:48.462 12:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:48.462 12:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:48.462 "subsystems": [ 00:13:48.462 { 00:13:48.462 "subsystem": "keyring", 00:13:48.462 "config": [] 00:13:48.462 }, 00:13:48.462 { 00:13:48.462 "subsystem": "iobuf", 00:13:48.462 "config": [ 00:13:48.462 { 00:13:48.462 "method": "iobuf_set_options", 00:13:48.462 "params": { 00:13:48.462 "small_pool_count": 8192, 00:13:48.462 "large_pool_count": 1024, 00:13:48.462 "small_bufsize": 8192, 00:13:48.462 "large_bufsize": 135168 00:13:48.462 } 00:13:48.462 } 00:13:48.462 ] 00:13:48.462 }, 00:13:48.462 { 00:13:48.462 "subsystem": 
"sock", 00:13:48.462 "config": [ 00:13:48.462 { 00:13:48.462 "method": "sock_set_default_impl", 00:13:48.462 "params": { 00:13:48.462 "impl_name": "uring" 00:13:48.462 } 00:13:48.462 }, 00:13:48.462 { 00:13:48.462 "method": "sock_impl_set_options", 00:13:48.462 "params": { 00:13:48.462 "impl_name": "ssl", 00:13:48.462 "recv_buf_size": 4096, 00:13:48.462 "send_buf_size": 4096, 00:13:48.462 "enable_recv_pipe": true, 00:13:48.462 "enable_quickack": false, 00:13:48.462 "enable_placement_id": 0, 00:13:48.462 "enable_zerocopy_send_server": true, 00:13:48.462 "enable_zerocopy_send_client": false, 00:13:48.462 "zerocopy_threshold": 0, 00:13:48.462 "tls_version": 0, 00:13:48.462 "enable_ktls": false 00:13:48.462 } 00:13:48.462 }, 00:13:48.462 { 00:13:48.462 "method": "sock_impl_set_options", 00:13:48.462 "params": { 00:13:48.462 "impl_name": "posix", 00:13:48.462 "recv_buf_size": 2097152, 00:13:48.462 "send_buf_size": 2097152, 00:13:48.462 "enable_recv_pipe": true, 00:13:48.462 "enable_quickack": false, 00:13:48.462 "enable_placement_id": 0, 00:13:48.462 "enable_zerocopy_send_server": true, 00:13:48.462 "enable_zerocopy_send_client": false, 00:13:48.462 "zerocopy_threshold": 0, 00:13:48.462 "tls_version": 0, 00:13:48.462 "enable_ktls": false 00:13:48.462 } 00:13:48.462 }, 00:13:48.462 { 00:13:48.462 "method": "sock_impl_set_options", 00:13:48.462 "params": { 00:13:48.462 "impl_name": "uring", 00:13:48.462 "recv_buf_size": 2097152, 00:13:48.462 "send_buf_size": 2097152, 00:13:48.462 "enable_recv_pipe": true, 00:13:48.462 "enable_quickack": false, 00:13:48.462 "enable_placement_id": 0, 00:13:48.462 "enable_zerocopy_send_server": false, 00:13:48.462 "enable_zerocopy_send_client": false, 00:13:48.462 "zerocopy_threshold": 0, 00:13:48.462 "tls_version": 0, 00:13:48.462 "enable_ktls": false 00:13:48.462 } 00:13:48.462 } 00:13:48.462 ] 00:13:48.462 }, 00:13:48.462 { 00:13:48.462 "subsystem": "vmd", 00:13:48.462 "config": [] 00:13:48.462 }, 00:13:48.462 { 00:13:48.462 "subsystem": "accel", 00:13:48.462 "config": [ 00:13:48.462 { 00:13:48.462 "method": "accel_set_options", 00:13:48.462 "params": { 00:13:48.462 "small_cache_size": 128, 00:13:48.462 "large_cache_size": 16, 00:13:48.462 "task_count": 2048, 00:13:48.462 "sequence_count": 2048, 00:13:48.462 "buf_count": 2048 00:13:48.463 } 00:13:48.463 } 00:13:48.463 ] 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "subsystem": "bdev", 00:13:48.463 "config": [ 00:13:48.463 { 00:13:48.463 "method": "bdev_set_options", 00:13:48.463 "params": { 00:13:48.463 "bdev_io_pool_size": 65535, 00:13:48.463 "bdev_io_cache_size": 256, 00:13:48.463 "bdev_auto_examine": true, 00:13:48.463 "iobuf_small_cache_size": 128, 00:13:48.463 "iobuf_large_cache_size": 16 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "bdev_raid_set_options", 00:13:48.463 "params": { 00:13:48.463 "process_window_size_kb": 1024 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "bdev_iscsi_set_options", 00:13:48.463 "params": { 00:13:48.463 "timeout_sec": 30 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "bdev_nvme_set_options", 00:13:48.463 "params": { 00:13:48.463 "action_on_timeout": "none", 00:13:48.463 "timeout_us": 0, 00:13:48.463 "timeout_admin_us": 0, 00:13:48.463 "keep_alive_timeout_ms": 10000, 00:13:48.463 "arbitration_burst": 0, 00:13:48.463 "low_priority_weight": 0, 00:13:48.463 "medium_priority_weight": 0, 00:13:48.463 "high_priority_weight": 0, 00:13:48.463 "nvme_adminq_poll_period_us": 10000, 00:13:48.463 "nvme_ioq_poll_period_us": 0, 
00:13:48.463 "io_queue_requests": 0, 00:13:48.463 "delay_cmd_submit": true, 00:13:48.463 "transport_retry_count": 4, 00:13:48.463 "bdev_retry_count": 3, 00:13:48.463 "transport_ack_timeout": 0, 00:13:48.463 "ctrlr_loss_timeout_sec": 0, 00:13:48.463 "reconnect_delay_sec": 0, 00:13:48.463 "fast_io_fail_timeout_sec": 0, 00:13:48.463 "disable_auto_failback": false, 00:13:48.463 "generate_uuids": false, 00:13:48.463 "transport_tos": 0, 00:13:48.463 "nvme_error_stat": false, 00:13:48.463 "rdma_srq_size": 0, 00:13:48.463 "io_path_stat": false, 00:13:48.463 "allow_accel_sequence": false, 00:13:48.463 "rdma_max_cq_size": 0, 00:13:48.463 "rdma_cm_event_timeout_ms": 0, 00:13:48.463 "dhchap_digests": [ 00:13:48.463 "sha256", 00:13:48.463 "sha384", 00:13:48.463 "sha512" 00:13:48.463 ], 00:13:48.463 "dhchap_dhgroups": [ 00:13:48.463 "null", 00:13:48.463 "ffdhe2048", 00:13:48.463 "ffdhe3072", 00:13:48.463 "ffdhe4096", 00:13:48.463 "ffdhe6144", 00:13:48.463 "ffdhe8192" 00:13:48.463 ] 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "bdev_nvme_set_hotplug", 00:13:48.463 "params": { 00:13:48.463 "period_us": 100000, 00:13:48.463 "enable": false 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "bdev_malloc_create", 00:13:48.463 "params": { 00:13:48.463 "name": "malloc0", 00:13:48.463 "num_blocks": 8192, 00:13:48.463 "block_size": 4096, 00:13:48.463 "physical_block_size": 4096, 00:13:48.463 "uuid": "b9f37f33-db60-4219-ba1e-8cf54ee52fc9", 00:13:48.463 "optimal_io_boundary": 0 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "bdev_wait_for_examine" 00:13:48.463 } 00:13:48.463 ] 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "subsystem": "nbd", 00:13:48.463 "config": [] 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "subsystem": "scheduler", 00:13:48.463 "config": [ 00:13:48.463 { 00:13:48.463 "method": "framework_set_scheduler", 00:13:48.463 "params": { 00:13:48.463 "name": "static" 00:13:48.463 } 00:13:48.463 } 00:13:48.463 ] 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "subsystem": "nvmf", 00:13:48.463 "config": [ 00:13:48.463 { 00:13:48.463 "method": "nvmf_set_config", 00:13:48.463 "params": { 00:13:48.463 "discovery_filter": "match_any", 00:13:48.463 "admin_cmd_passthru": { 00:13:48.463 "identify_ctrlr": false 00:13:48.463 } 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "nvmf_set_max_subsystems", 00:13:48.463 "params": { 00:13:48.463 "max_subsystems": 1024 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "nvmf_set_crdt", 00:13:48.463 "params": { 00:13:48.463 "crdt1": 0, 00:13:48.463 "crdt2": 0, 00:13:48.463 "crdt3": 0 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "nvmf_create_transport", 00:13:48.463 "params": { 00:13:48.463 "trtype": "TCP", 00:13:48.463 "max_queue_depth": 128, 00:13:48.463 "max_io_qpairs_per_ctrlr": 127, 00:13:48.463 "in_capsule_data_size": 4096, 00:13:48.463 "max_io_size": 131072, 00:13:48.463 "io_unit_size": 131072, 00:13:48.463 "max_aq_depth": 128, 00:13:48.463 "num_shared_buffers": 511, 00:13:48.463 "buf_cache_size": 4294967295, 00:13:48.463 "dif_insert_or_strip": false, 00:13:48.463 "zcopy": false, 00:13:48.463 "c2h_success": false, 00:13:48.463 "sock_priority": 0, 00:13:48.463 "abort_timeout_sec": 1, 00:13:48.463 "ack_timeout": 0, 00:13:48.463 "data_wr_pool_size": 0 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "nvmf_create_subsystem", 00:13:48.463 "params": { 00:13:48.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.463 "allow_any_host": 
false, 00:13:48.463 "serial_number": "SPDK00000000000001", 00:13:48.463 "model_number": "SPDK bdev Controller", 00:13:48.463 "max_namespaces": 10, 00:13:48.463 "min_cntlid": 1, 00:13:48.463 "max_cntlid": 65519, 00:13:48.463 "ana_reporting": false 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "nvmf_subsystem_add_host", 00:13:48.463 "params": { 00:13:48.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.463 "host": "nqn.2016-06.io.spdk:host1", 00:13:48.463 "psk": "/tmp/tmp.m8Wdz1ev8E" 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "nvmf_subsystem_add_ns", 00:13:48.463 "params": { 00:13:48.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.463 "namespace": { 00:13:48.463 "nsid": 1, 00:13:48.463 "bdev_name": "malloc0", 00:13:48.463 "nguid": "B9F37F33DB604219BA1E8CF54EE52FC9", 00:13:48.463 "uuid": "b9f37f33-db60-4219-ba1e-8cf54ee52fc9", 00:13:48.463 "no_auto_visible": false 00:13:48.463 } 00:13:48.463 } 00:13:48.463 }, 00:13:48.463 { 00:13:48.463 "method": "nvmf_subsystem_add_listener", 00:13:48.463 "params": { 00:13:48.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.463 "listen_address": { 00:13:48.463 "trtype": "TCP", 00:13:48.463 "adrfam": "IPv4", 00:13:48.463 "traddr": "10.0.0.2", 00:13:48.463 "trsvcid": "4420" 00:13:48.464 }, 00:13:48.464 "secure_channel": true 00:13:48.464 } 00:13:48.464 } 00:13:48.464 ] 00:13:48.464 } 00:13:48.464 ] 00:13:48.464 }' 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73751 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73751 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73751 ']' 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.464 12:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.464 [2024-07-15 12:56:04.418542] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:48.464 [2024-07-15 12:56:04.418647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.721 [2024-07-15 12:56:04.558961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.721 [2024-07-15 12:56:04.661199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:48.721 [2024-07-15 12:56:04.661267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.721 [2024-07-15 12:56:04.661278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.721 [2024-07-15 12:56:04.661286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.721 [2024-07-15 12:56:04.661293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.721 [2024-07-15 12:56:04.661414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.008 [2024-07-15 12:56:04.831499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:49.008 [2024-07-15 12:56:04.900418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.008 [2024-07-15 12:56:04.916321] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:49.008 [2024-07-15 12:56:04.932362] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:49.008 [2024-07-15 12:56:04.932654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73782 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73782 /var/tmp/bdevperf.sock 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73782 ']' 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
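Note on the pattern being exercised here: the two save_config dumps captured earlier (tgtconf from the target's default RPC socket, bdevperfconf from /var/tmp/bdevperf.sock) let this pass restart both applications from JSON instead of replaying the RPCs one by one — the nvmf_tgt above was fed its configuration through /dev/fd/62, and the bdevperf launch that follows uses /dev/fd/63. A condensed sketch of that pattern, assuming the variable names, binary paths and PSK file shown in this log (the ip netns wrapper and the -i/-e flags from the log are omitted for brevity):

    # target side: replay the saved subsystem/listener/PSK configuration at startup
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")

    # initiator side: bdevperf picks up the TLS-enabled bdev_nvme_attach_controller from its saved config
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf")

Process substitution is what produces the /dev/fd/62 and /dev/fd/63 paths seen in the traced commands.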
00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:49.600 12:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:49.600 "subsystems": [ 00:13:49.600 { 00:13:49.600 "subsystem": "keyring", 00:13:49.600 "config": [] 00:13:49.600 }, 00:13:49.600 { 00:13:49.600 "subsystem": "iobuf", 00:13:49.600 "config": [ 00:13:49.600 { 00:13:49.600 "method": "iobuf_set_options", 00:13:49.600 "params": { 00:13:49.600 "small_pool_count": 8192, 00:13:49.600 "large_pool_count": 1024, 00:13:49.600 "small_bufsize": 8192, 00:13:49.600 "large_bufsize": 135168 00:13:49.600 } 00:13:49.600 } 00:13:49.600 ] 00:13:49.600 }, 00:13:49.600 { 00:13:49.600 "subsystem": "sock", 00:13:49.600 "config": [ 00:13:49.600 { 00:13:49.600 "method": "sock_set_default_impl", 00:13:49.600 "params": { 00:13:49.600 "impl_name": "uring" 00:13:49.600 } 00:13:49.600 }, 00:13:49.600 { 00:13:49.600 "method": "sock_impl_set_options", 00:13:49.600 "params": { 00:13:49.600 "impl_name": "ssl", 00:13:49.600 "recv_buf_size": 4096, 00:13:49.600 "send_buf_size": 4096, 00:13:49.600 "enable_recv_pipe": true, 00:13:49.600 "enable_quickack": false, 00:13:49.600 "enable_placement_id": 0, 00:13:49.600 "enable_zerocopy_send_server": true, 00:13:49.600 "enable_zerocopy_send_client": false, 00:13:49.600 "zerocopy_threshold": 0, 00:13:49.600 "tls_version": 0, 00:13:49.600 "enable_ktls": false 00:13:49.600 } 00:13:49.600 }, 00:13:49.601 { 00:13:49.601 "method": "sock_impl_set_options", 00:13:49.601 "params": { 00:13:49.601 "impl_name": "posix", 00:13:49.601 "recv_buf_size": 2097152, 00:13:49.601 "send_buf_size": 2097152, 00:13:49.601 "enable_recv_pipe": true, 00:13:49.601 "enable_quickack": false, 00:13:49.601 "enable_placement_id": 0, 00:13:49.601 "enable_zerocopy_send_server": true, 00:13:49.601 "enable_zerocopy_send_client": false, 00:13:49.601 "zerocopy_threshold": 0, 00:13:49.601 "tls_version": 0, 00:13:49.601 "enable_ktls": false 00:13:49.601 } 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "method": "sock_impl_set_options", 00:13:49.601 "params": { 00:13:49.601 "impl_name": "uring", 00:13:49.601 "recv_buf_size": 2097152, 00:13:49.601 "send_buf_size": 2097152, 00:13:49.601 "enable_recv_pipe": true, 00:13:49.601 "enable_quickack": false, 00:13:49.601 "enable_placement_id": 0, 00:13:49.601 "enable_zerocopy_send_server": false, 00:13:49.601 "enable_zerocopy_send_client": false, 00:13:49.601 "zerocopy_threshold": 0, 00:13:49.601 "tls_version": 0, 00:13:49.601 "enable_ktls": false 00:13:49.601 } 00:13:49.601 } 00:13:49.601 ] 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "subsystem": "vmd", 00:13:49.601 "config": [] 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "subsystem": "accel", 00:13:49.601 "config": [ 00:13:49.601 { 00:13:49.601 "method": "accel_set_options", 00:13:49.601 "params": { 00:13:49.601 "small_cache_size": 128, 00:13:49.601 "large_cache_size": 16, 00:13:49.601 "task_count": 2048, 00:13:49.601 "sequence_count": 2048, 00:13:49.601 "buf_count": 2048 00:13:49.601 } 00:13:49.601 } 00:13:49.601 ] 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "subsystem": "bdev", 00:13:49.601 "config": [ 00:13:49.601 { 00:13:49.601 "method": "bdev_set_options", 00:13:49.601 "params": { 00:13:49.601 "bdev_io_pool_size": 65535, 00:13:49.601 "bdev_io_cache_size": 256, 00:13:49.601 "bdev_auto_examine": true, 00:13:49.601 "iobuf_small_cache_size": 128, 00:13:49.601 "iobuf_large_cache_size": 16 00:13:49.601 } 00:13:49.601 
}, 00:13:49.601 { 00:13:49.601 "method": "bdev_raid_set_options", 00:13:49.601 "params": { 00:13:49.601 "process_window_size_kb": 1024 00:13:49.601 } 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "method": "bdev_iscsi_set_options", 00:13:49.601 "params": { 00:13:49.601 "timeout_sec": 30 00:13:49.601 } 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "method": "bdev_nvme_set_options", 00:13:49.601 "params": { 00:13:49.601 "action_on_timeout": "none", 00:13:49.601 "timeout_us": 0, 00:13:49.601 "timeout_admin_us": 0, 00:13:49.601 "keep_alive_timeout_ms": 10000, 00:13:49.601 "arbitration_burst": 0, 00:13:49.601 "low_priority_weight": 0, 00:13:49.601 "medium_priority_weight": 0, 00:13:49.601 "high_priority_weight": 0, 00:13:49.601 "nvme_adminq_poll_period_us": 10000, 00:13:49.601 "nvme_ioq_poll_period_us": 0, 00:13:49.601 "io_queue_requests": 512, 00:13:49.601 "delay_cmd_submit": true, 00:13:49.601 "transport_retry_count": 4, 00:13:49.601 "bdev_retry_count": 3, 00:13:49.601 "transport_ack_timeout": 0, 00:13:49.601 "ctrlr_loss_timeout_sec": 0, 00:13:49.601 "reconnect_delay_sec": 0, 00:13:49.601 "fast_io_fail_timeout_sec": 0, 00:13:49.601 "disable_auto_failback": false, 00:13:49.601 "generate_uuids": false, 00:13:49.601 "transport_tos": 0, 00:13:49.601 "nvme_error_stat": false, 00:13:49.601 "rdma_srq_size": 0, 00:13:49.601 "io_path_stat": false, 00:13:49.601 "allow_accel_sequence": false, 00:13:49.601 "rdma_max_cq_size": 0, 00:13:49.601 "rdma_cm_event_timeout_ms": 0, 00:13:49.601 "dhchap_digests": [ 00:13:49.601 "sha256", 00:13:49.601 "sha384", 00:13:49.601 "sha512" 00:13:49.601 ], 00:13:49.601 "dhchap_dhgroups": [ 00:13:49.601 "null", 00:13:49.601 "ffdhe2048", 00:13:49.601 "ffdhe3072", 00:13:49.601 "ffdhe4096", 00:13:49.601 "ffdhe6144", 00:13:49.601 "ffdhe8192" 00:13:49.601 ] 00:13:49.601 } 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "method": "bdev_nvme_attach_controller", 00:13:49.601 "params": { 00:13:49.601 "name": "TLSTEST", 00:13:49.601 "trtype": "TCP", 00:13:49.601 "adrfam": "IPv4", 00:13:49.601 "traddr": "10.0.0.2", 00:13:49.601 "trsvcid": "4420", 00:13:49.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.601 "prchk_reftag": false, 00:13:49.601 "prchk_guard": false, 00:13:49.601 "ctrlr_loss_timeout_sec": 0, 00:13:49.601 "reconnect_delay_sec": 0, 00:13:49.601 "fast_io_fail_timeout_sec": 0, 00:13:49.601 "psk": "/tmp/tmp.m8Wdz1ev8E", 00:13:49.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:49.601 "hdgst": false, 00:13:49.601 "ddgst": false 00:13:49.601 } 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "method": "bdev_nvme_set_hotplug", 00:13:49.601 "params": { 00:13:49.601 "period_us": 100000, 00:13:49.601 "enable": false 00:13:49.601 } 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "method": "bdev_wait_for_examine" 00:13:49.601 } 00:13:49.601 ] 00:13:49.601 }, 00:13:49.601 { 00:13:49.601 "subsystem": "nbd", 00:13:49.601 "config": [] 00:13:49.601 } 00:13:49.601 ] 00:13:49.601 }' 00:13:49.601 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.601 12:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.601 [2024-07-15 12:56:05.447376] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:49.601 [2024-07-15 12:56:05.447609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73782 ] 00:13:49.601 [2024-07-15 12:56:05.579599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.860 [2024-07-15 12:56:05.691847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.860 [2024-07-15 12:56:05.826114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:49.860 [2024-07-15 12:56:05.865828] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:49.860 [2024-07-15 12:56:05.866332] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:50.427 12:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.427 12:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:50.427 12:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:50.685 Running I/O for 10 seconds... 00:14:00.652 00:14:00.652 Latency(us) 00:14:00.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.652 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:00.652 Verification LBA range: start 0x0 length 0x2000 00:14:00.652 TLSTESTn1 : 10.02 4042.32 15.79 0.00 0.00 31603.46 7149.38 33125.47 00:14:00.652 =================================================================================================================== 00:14:00.652 Total : 4042.32 15.79 0.00 0.00 31603.46 7149.38 33125.47 00:14:00.652 0 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73782 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73782 ']' 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73782 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73782 00:14:00.652 killing process with pid 73782 00:14:00.652 Received shutdown signal, test time was about 10.000000 seconds 00:14:00.652 00:14:00.652 Latency(us) 00:14:00.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.652 =================================================================================================================== 00:14:00.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73782' 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73782 00:14:00.652 [2024-07-15 12:56:16.598078] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:00.652 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73782 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73751 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73751 ']' 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73751 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73751 00:14:00.911 killing process with pid 73751 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73751' 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73751 00:14:00.911 [2024-07-15 12:56:16.838135] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:00.911 12:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73751 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73924 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73924 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73924 ']' 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.169 12:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.169 [2024-07-15 12:56:17.149802] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:01.169 [2024-07-15 12:56:17.150441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.428 [2024-07-15 12:56:17.295662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.428 [2024-07-15 12:56:17.412188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:01.428 [2024-07-15 12:56:17.412534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.428 [2024-07-15 12:56:17.412722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.428 [2024-07-15 12:56:17.412887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.428 [2024-07-15 12:56:17.412930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.428 [2024-07-15 12:56:17.413078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.428 [2024-07-15 12:56:17.468033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.m8Wdz1ev8E 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m8Wdz1ev8E 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:02.363 [2024-07-15 12:56:18.362330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.363 12:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:02.621 12:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:02.879 [2024-07-15 12:56:18.882453] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:02.879 [2024-07-15 12:56:18.882712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.879 12:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:03.137 malloc0 00:14:03.138 12:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:03.409 12:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m8Wdz1ev8E 00:14:03.666 [2024-07-15 12:56:19.698667] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:03.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
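Stripped of the xtrace noise, the target-side TLS setup traced above (setup_nvmf_tgt in target/tls.sh) reduces to a short RPC sequence; a condensed sketch using only the calls and arguments visible in this log (rpc.py is scripts/rpc.py under the spdk repo shown in the traces):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.m8Wdz1ev8E

The -k flag on the listener is what the earlier save_config dump records as "secure_channel": true, and passing the PSK as a raw file path to nvmf_subsystem_add_host is what triggers the "PSK path ... deprecated" warnings seen throughout this log.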
00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73975 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73975 /var/tmp/bdevperf.sock 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73975 ']' 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.666 12:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.925 [2024-07-15 12:56:19.762832] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:03.925 [2024-07-15 12:56:19.763062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73975 ] 00:14:03.925 [2024-07-15 12:56:19.894531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.222 [2024-07-15 12:56:19.994606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.222 [2024-07-15 12:56:20.047071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:04.800 12:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.800 12:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:04.800 12:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.m8Wdz1ev8E 00:14:05.059 12:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:05.318 [2024-07-15 12:56:21.259623] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:05.318 nvme0n1 00:14:05.318 12:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:05.576 Running I/O for 1 seconds... 
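The short run whose results follow differs from the earlier 10-second run only in how the initiator supplies the PSK: instead of the deprecated raw path (which produced the spdk_nvme_ctrlr_opts.psk removal warnings above), the key is first registered with keyring_file_add_key and then referenced by name. A side-by-side sketch built from the commands already traced in this log:

    # deprecated flow: PSK handed over as a file path (warns about removal in v24.09)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.m8Wdz1ev8E

    # keyring flow: register the key once, then attach by key name
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.m8Wdz1ev8E
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

Either way the controller comes up as a TLS-secured NVMe/TCP connection; the bdev name (TLSTEST vs nvme0) is simply what each test step chose.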
00:14:06.510 00:14:06.510 Latency(us) 00:14:06.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.510 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:06.510 Verification LBA range: start 0x0 length 0x2000 00:14:06.510 nvme0n1 : 1.03 3981.93 15.55 0.00 0.00 31794.73 7506.85 20137.43 00:14:06.510 =================================================================================================================== 00:14:06.510 Total : 3981.93 15.55 0.00 0.00 31794.73 7506.85 20137.43 00:14:06.510 0 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73975 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73975 ']' 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73975 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73975 00:14:06.510 killing process with pid 73975 00:14:06.510 Received shutdown signal, test time was about 1.000000 seconds 00:14:06.510 00:14:06.510 Latency(us) 00:14:06.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.510 =================================================================================================================== 00:14:06.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73975' 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73975 00:14:06.510 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73975 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73924 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73924 ']' 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73924 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73924 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73924' 00:14:06.769 killing process with pid 73924 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73924 00:14:06.769 [2024-07-15 12:56:22.758041] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:06.769 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73924 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74026 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74026 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74026 ']' 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.028 12:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.028 [2024-07-15 12:56:23.041585] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:07.028 [2024-07-15 12:56:23.041680] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.287 [2024-07-15 12:56:23.180831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.287 [2024-07-15 12:56:23.292737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.287 [2024-07-15 12:56:23.292804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.287 [2024-07-15 12:56:23.292819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.287 [2024-07-15 12:56:23.292829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.287 [2024-07-15 12:56:23.292839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
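The run above configures the TLS host side entirely through rpc.py against bdevperf's private RPC socket: the pre-shared key file is registered as keyring key "key0", a TLS-protected NVMe/TCP controller is attached by referencing that key, and the verify workload is then kicked off. Stripped of the test harness, the sequence looks like this sketch (run from the SPDK repository root; socket path, key file, address, and NQNs are the values used in this run):

# Register the PSK file under the name "key0" on the bdevperf side.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.m8Wdz1ev8E

# Attach a TLS-protected NVMe/TCP controller that authenticates with that key.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# Drive the configured verify workload through the bdevperf helper script.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests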
00:14:07.287 [2024-07-15 12:56:23.292877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.545 [2024-07-15 12:56:23.348751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:08.112 12:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.112 12:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:08.112 12:56:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.112 12:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.112 12:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.112 [2024-07-15 12:56:24.005947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.112 malloc0 00:14:08.112 [2024-07-15 12:56:24.037077] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:08.112 [2024-07-15 12:56:24.037276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74058 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74058 /var/tmp/bdevperf.sock 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74058 ']' 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.112 12:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.112 [2024-07-15 12:56:24.131600] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:08.112 [2024-07-15 12:56:24.131964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74058 ] 00:14:08.371 [2024-07-15 12:56:24.281507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.371 [2024-07-15 12:56:24.402281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.630 [2024-07-15 12:56:24.457793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.200 12:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.200 12:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:09.200 12:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.m8Wdz1ev8E 00:14:09.458 12:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:09.718 [2024-07-15 12:56:25.660188] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:09.718 nvme0n1 00:14:09.718 12:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:09.977 Running I/O for 1 seconds... 00:14:10.914 00:14:10.914 Latency(us) 00:14:10.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.914 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:10.914 Verification LBA range: start 0x0 length 0x2000 00:14:10.914 nvme0n1 : 1.03 3732.83 14.58 0.00 0.00 33892.18 7238.75 19899.11 00:14:10.914 =================================================================================================================== 00:14:10.914 Total : 3732.83 14.58 0.00 0.00 33892.18 7238.75 19899.11 00:14:10.914 0 00:14:10.914 12:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:10.914 12:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.914 12:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.174 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.174 12:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:11.174 "subsystems": [ 00:14:11.174 { 00:14:11.174 "subsystem": "keyring", 00:14:11.174 "config": [ 00:14:11.174 { 00:14:11.174 "method": "keyring_file_add_key", 00:14:11.174 "params": { 00:14:11.174 "name": "key0", 00:14:11.174 "path": "/tmp/tmp.m8Wdz1ev8E" 00:14:11.174 } 00:14:11.174 } 00:14:11.174 ] 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "subsystem": "iobuf", 00:14:11.174 "config": [ 00:14:11.174 { 00:14:11.174 "method": "iobuf_set_options", 00:14:11.174 "params": { 00:14:11.174 "small_pool_count": 8192, 00:14:11.174 "large_pool_count": 1024, 00:14:11.174 "small_bufsize": 8192, 00:14:11.174 "large_bufsize": 135168 00:14:11.174 } 00:14:11.174 } 00:14:11.174 ] 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "subsystem": "sock", 00:14:11.174 "config": [ 00:14:11.174 { 00:14:11.174 "method": "sock_set_default_impl", 00:14:11.174 "params": { 00:14:11.174 "impl_name": "uring" 
00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "sock_impl_set_options", 00:14:11.174 "params": { 00:14:11.174 "impl_name": "ssl", 00:14:11.174 "recv_buf_size": 4096, 00:14:11.174 "send_buf_size": 4096, 00:14:11.174 "enable_recv_pipe": true, 00:14:11.174 "enable_quickack": false, 00:14:11.174 "enable_placement_id": 0, 00:14:11.174 "enable_zerocopy_send_server": true, 00:14:11.174 "enable_zerocopy_send_client": false, 00:14:11.174 "zerocopy_threshold": 0, 00:14:11.174 "tls_version": 0, 00:14:11.174 "enable_ktls": false 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "sock_impl_set_options", 00:14:11.174 "params": { 00:14:11.174 "impl_name": "posix", 00:14:11.174 "recv_buf_size": 2097152, 00:14:11.174 "send_buf_size": 2097152, 00:14:11.174 "enable_recv_pipe": true, 00:14:11.174 "enable_quickack": false, 00:14:11.174 "enable_placement_id": 0, 00:14:11.174 "enable_zerocopy_send_server": true, 00:14:11.174 "enable_zerocopy_send_client": false, 00:14:11.174 "zerocopy_threshold": 0, 00:14:11.174 "tls_version": 0, 00:14:11.174 "enable_ktls": false 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "sock_impl_set_options", 00:14:11.174 "params": { 00:14:11.174 "impl_name": "uring", 00:14:11.174 "recv_buf_size": 2097152, 00:14:11.174 "send_buf_size": 2097152, 00:14:11.174 "enable_recv_pipe": true, 00:14:11.174 "enable_quickack": false, 00:14:11.174 "enable_placement_id": 0, 00:14:11.174 "enable_zerocopy_send_server": false, 00:14:11.174 "enable_zerocopy_send_client": false, 00:14:11.174 "zerocopy_threshold": 0, 00:14:11.174 "tls_version": 0, 00:14:11.174 "enable_ktls": false 00:14:11.174 } 00:14:11.174 } 00:14:11.174 ] 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "subsystem": "vmd", 00:14:11.174 "config": [] 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "subsystem": "accel", 00:14:11.174 "config": [ 00:14:11.174 { 00:14:11.174 "method": "accel_set_options", 00:14:11.174 "params": { 00:14:11.174 "small_cache_size": 128, 00:14:11.174 "large_cache_size": 16, 00:14:11.174 "task_count": 2048, 00:14:11.174 "sequence_count": 2048, 00:14:11.174 "buf_count": 2048 00:14:11.174 } 00:14:11.174 } 00:14:11.174 ] 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "subsystem": "bdev", 00:14:11.174 "config": [ 00:14:11.174 { 00:14:11.174 "method": "bdev_set_options", 00:14:11.174 "params": { 00:14:11.174 "bdev_io_pool_size": 65535, 00:14:11.174 "bdev_io_cache_size": 256, 00:14:11.174 "bdev_auto_examine": true, 00:14:11.174 "iobuf_small_cache_size": 128, 00:14:11.174 "iobuf_large_cache_size": 16 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "bdev_raid_set_options", 00:14:11.174 "params": { 00:14:11.174 "process_window_size_kb": 1024 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "bdev_iscsi_set_options", 00:14:11.174 "params": { 00:14:11.174 "timeout_sec": 30 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "bdev_nvme_set_options", 00:14:11.174 "params": { 00:14:11.174 "action_on_timeout": "none", 00:14:11.174 "timeout_us": 0, 00:14:11.174 "timeout_admin_us": 0, 00:14:11.174 "keep_alive_timeout_ms": 10000, 00:14:11.174 "arbitration_burst": 0, 00:14:11.174 "low_priority_weight": 0, 00:14:11.174 "medium_priority_weight": 0, 00:14:11.174 "high_priority_weight": 0, 00:14:11.174 "nvme_adminq_poll_period_us": 10000, 00:14:11.174 "nvme_ioq_poll_period_us": 0, 00:14:11.174 "io_queue_requests": 0, 00:14:11.174 "delay_cmd_submit": true, 00:14:11.174 "transport_retry_count": 4, 00:14:11.174 "bdev_retry_count": 3, 
00:14:11.174 "transport_ack_timeout": 0, 00:14:11.174 "ctrlr_loss_timeout_sec": 0, 00:14:11.174 "reconnect_delay_sec": 0, 00:14:11.174 "fast_io_fail_timeout_sec": 0, 00:14:11.174 "disable_auto_failback": false, 00:14:11.174 "generate_uuids": false, 00:14:11.174 "transport_tos": 0, 00:14:11.174 "nvme_error_stat": false, 00:14:11.174 "rdma_srq_size": 0, 00:14:11.174 "io_path_stat": false, 00:14:11.174 "allow_accel_sequence": false, 00:14:11.174 "rdma_max_cq_size": 0, 00:14:11.174 "rdma_cm_event_timeout_ms": 0, 00:14:11.174 "dhchap_digests": [ 00:14:11.174 "sha256", 00:14:11.174 "sha384", 00:14:11.174 "sha512" 00:14:11.174 ], 00:14:11.174 "dhchap_dhgroups": [ 00:14:11.174 "null", 00:14:11.174 "ffdhe2048", 00:14:11.174 "ffdhe3072", 00:14:11.174 "ffdhe4096", 00:14:11.174 "ffdhe6144", 00:14:11.174 "ffdhe8192" 00:14:11.174 ] 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "bdev_nvme_set_hotplug", 00:14:11.174 "params": { 00:14:11.174 "period_us": 100000, 00:14:11.174 "enable": false 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "bdev_malloc_create", 00:14:11.174 "params": { 00:14:11.174 "name": "malloc0", 00:14:11.174 "num_blocks": 8192, 00:14:11.174 "block_size": 4096, 00:14:11.174 "physical_block_size": 4096, 00:14:11.174 "uuid": "32a79dcf-008d-4ad1-9c69-a76f167403b4", 00:14:11.174 "optimal_io_boundary": 0 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "bdev_wait_for_examine" 00:14:11.174 } 00:14:11.174 ] 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "subsystem": "nbd", 00:14:11.174 "config": [] 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "subsystem": "scheduler", 00:14:11.174 "config": [ 00:14:11.174 { 00:14:11.174 "method": "framework_set_scheduler", 00:14:11.174 "params": { 00:14:11.174 "name": "static" 00:14:11.174 } 00:14:11.174 } 00:14:11.174 ] 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "subsystem": "nvmf", 00:14:11.174 "config": [ 00:14:11.174 { 00:14:11.174 "method": "nvmf_set_config", 00:14:11.174 "params": { 00:14:11.174 "discovery_filter": "match_any", 00:14:11.174 "admin_cmd_passthru": { 00:14:11.174 "identify_ctrlr": false 00:14:11.174 } 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "nvmf_set_max_subsystems", 00:14:11.174 "params": { 00:14:11.174 "max_subsystems": 1024 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "nvmf_set_crdt", 00:14:11.174 "params": { 00:14:11.174 "crdt1": 0, 00:14:11.174 "crdt2": 0, 00:14:11.174 "crdt3": 0 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "nvmf_create_transport", 00:14:11.174 "params": { 00:14:11.174 "trtype": "TCP", 00:14:11.174 "max_queue_depth": 128, 00:14:11.174 "max_io_qpairs_per_ctrlr": 127, 00:14:11.174 "in_capsule_data_size": 4096, 00:14:11.174 "max_io_size": 131072, 00:14:11.174 "io_unit_size": 131072, 00:14:11.174 "max_aq_depth": 128, 00:14:11.174 "num_shared_buffers": 511, 00:14:11.174 "buf_cache_size": 4294967295, 00:14:11.174 "dif_insert_or_strip": false, 00:14:11.174 "zcopy": false, 00:14:11.174 "c2h_success": false, 00:14:11.174 "sock_priority": 0, 00:14:11.174 "abort_timeout_sec": 1, 00:14:11.174 "ack_timeout": 0, 00:14:11.174 "data_wr_pool_size": 0 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "nvmf_create_subsystem", 00:14:11.174 "params": { 00:14:11.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.174 "allow_any_host": false, 00:14:11.174 "serial_number": "00000000000000000000", 00:14:11.174 "model_number": "SPDK bdev Controller", 00:14:11.174 "max_namespaces": 32, 
00:14:11.174 "min_cntlid": 1, 00:14:11.174 "max_cntlid": 65519, 00:14:11.174 "ana_reporting": false 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "nvmf_subsystem_add_host", 00:14:11.174 "params": { 00:14:11.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.174 "host": "nqn.2016-06.io.spdk:host1", 00:14:11.174 "psk": "key0" 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.174 "method": "nvmf_subsystem_add_ns", 00:14:11.174 "params": { 00:14:11.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.174 "namespace": { 00:14:11.174 "nsid": 1, 00:14:11.174 "bdev_name": "malloc0", 00:14:11.174 "nguid": "32A79DCF008D4AD19C69A76F167403B4", 00:14:11.174 "uuid": "32a79dcf-008d-4ad1-9c69-a76f167403b4", 00:14:11.174 "no_auto_visible": false 00:14:11.174 } 00:14:11.174 } 00:14:11.174 }, 00:14:11.174 { 00:14:11.175 "method": "nvmf_subsystem_add_listener", 00:14:11.175 "params": { 00:14:11.175 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.175 "listen_address": { 00:14:11.175 "trtype": "TCP", 00:14:11.175 "adrfam": "IPv4", 00:14:11.175 "traddr": "10.0.0.2", 00:14:11.175 "trsvcid": "4420" 00:14:11.175 }, 00:14:11.175 "secure_channel": true 00:14:11.175 } 00:14:11.175 } 00:14:11.175 ] 00:14:11.175 } 00:14:11.175 ] 00:14:11.175 }' 00:14:11.175 12:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:11.434 12:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:11.434 "subsystems": [ 00:14:11.434 { 00:14:11.434 "subsystem": "keyring", 00:14:11.434 "config": [ 00:14:11.434 { 00:14:11.434 "method": "keyring_file_add_key", 00:14:11.434 "params": { 00:14:11.434 "name": "key0", 00:14:11.434 "path": "/tmp/tmp.m8Wdz1ev8E" 00:14:11.434 } 00:14:11.434 } 00:14:11.434 ] 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "subsystem": "iobuf", 00:14:11.434 "config": [ 00:14:11.434 { 00:14:11.434 "method": "iobuf_set_options", 00:14:11.434 "params": { 00:14:11.434 "small_pool_count": 8192, 00:14:11.434 "large_pool_count": 1024, 00:14:11.434 "small_bufsize": 8192, 00:14:11.434 "large_bufsize": 135168 00:14:11.434 } 00:14:11.434 } 00:14:11.434 ] 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "subsystem": "sock", 00:14:11.434 "config": [ 00:14:11.434 { 00:14:11.434 "method": "sock_set_default_impl", 00:14:11.434 "params": { 00:14:11.434 "impl_name": "uring" 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "sock_impl_set_options", 00:14:11.434 "params": { 00:14:11.434 "impl_name": "ssl", 00:14:11.434 "recv_buf_size": 4096, 00:14:11.434 "send_buf_size": 4096, 00:14:11.434 "enable_recv_pipe": true, 00:14:11.434 "enable_quickack": false, 00:14:11.434 "enable_placement_id": 0, 00:14:11.434 "enable_zerocopy_send_server": true, 00:14:11.434 "enable_zerocopy_send_client": false, 00:14:11.434 "zerocopy_threshold": 0, 00:14:11.434 "tls_version": 0, 00:14:11.434 "enable_ktls": false 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "sock_impl_set_options", 00:14:11.434 "params": { 00:14:11.434 "impl_name": "posix", 00:14:11.434 "recv_buf_size": 2097152, 00:14:11.434 "send_buf_size": 2097152, 00:14:11.434 "enable_recv_pipe": true, 00:14:11.434 "enable_quickack": false, 00:14:11.434 "enable_placement_id": 0, 00:14:11.434 "enable_zerocopy_send_server": true, 00:14:11.434 "enable_zerocopy_send_client": false, 00:14:11.434 "zerocopy_threshold": 0, 00:14:11.434 "tls_version": 0, 00:14:11.434 "enable_ktls": false 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": 
"sock_impl_set_options", 00:14:11.434 "params": { 00:14:11.434 "impl_name": "uring", 00:14:11.434 "recv_buf_size": 2097152, 00:14:11.434 "send_buf_size": 2097152, 00:14:11.434 "enable_recv_pipe": true, 00:14:11.434 "enable_quickack": false, 00:14:11.434 "enable_placement_id": 0, 00:14:11.434 "enable_zerocopy_send_server": false, 00:14:11.434 "enable_zerocopy_send_client": false, 00:14:11.434 "zerocopy_threshold": 0, 00:14:11.434 "tls_version": 0, 00:14:11.434 "enable_ktls": false 00:14:11.434 } 00:14:11.434 } 00:14:11.434 ] 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "subsystem": "vmd", 00:14:11.434 "config": [] 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "subsystem": "accel", 00:14:11.434 "config": [ 00:14:11.434 { 00:14:11.434 "method": "accel_set_options", 00:14:11.434 "params": { 00:14:11.434 "small_cache_size": 128, 00:14:11.434 "large_cache_size": 16, 00:14:11.434 "task_count": 2048, 00:14:11.434 "sequence_count": 2048, 00:14:11.434 "buf_count": 2048 00:14:11.434 } 00:14:11.434 } 00:14:11.434 ] 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "subsystem": "bdev", 00:14:11.434 "config": [ 00:14:11.434 { 00:14:11.434 "method": "bdev_set_options", 00:14:11.434 "params": { 00:14:11.434 "bdev_io_pool_size": 65535, 00:14:11.434 "bdev_io_cache_size": 256, 00:14:11.434 "bdev_auto_examine": true, 00:14:11.434 "iobuf_small_cache_size": 128, 00:14:11.434 "iobuf_large_cache_size": 16 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "bdev_raid_set_options", 00:14:11.434 "params": { 00:14:11.434 "process_window_size_kb": 1024 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "bdev_iscsi_set_options", 00:14:11.434 "params": { 00:14:11.434 "timeout_sec": 30 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "bdev_nvme_set_options", 00:14:11.434 "params": { 00:14:11.434 "action_on_timeout": "none", 00:14:11.434 "timeout_us": 0, 00:14:11.434 "timeout_admin_us": 0, 00:14:11.434 "keep_alive_timeout_ms": 10000, 00:14:11.434 "arbitration_burst": 0, 00:14:11.434 "low_priority_weight": 0, 00:14:11.434 "medium_priority_weight": 0, 00:14:11.434 "high_priority_weight": 0, 00:14:11.434 "nvme_adminq_poll_period_us": 10000, 00:14:11.434 "nvme_ioq_poll_period_us": 0, 00:14:11.434 "io_queue_requests": 512, 00:14:11.434 "delay_cmd_submit": true, 00:14:11.434 "transport_retry_count": 4, 00:14:11.434 "bdev_retry_count": 3, 00:14:11.434 "transport_ack_timeout": 0, 00:14:11.434 "ctrlr_loss_timeout_sec": 0, 00:14:11.434 "reconnect_delay_sec": 0, 00:14:11.434 "fast_io_fail_timeout_sec": 0, 00:14:11.434 "disable_auto_failback": false, 00:14:11.434 "generate_uuids": false, 00:14:11.434 "transport_tos": 0, 00:14:11.434 "nvme_error_stat": false, 00:14:11.434 "rdma_srq_size": 0, 00:14:11.434 "io_path_stat": false, 00:14:11.434 "allow_accel_sequence": false, 00:14:11.434 "rdma_max_cq_size": 0, 00:14:11.434 "rdma_cm_event_timeout_ms": 0, 00:14:11.434 "dhchap_digests": [ 00:14:11.434 "sha256", 00:14:11.434 "sha384", 00:14:11.434 "sha512" 00:14:11.434 ], 00:14:11.434 "dhchap_dhgroups": [ 00:14:11.434 "null", 00:14:11.434 "ffdhe2048", 00:14:11.434 "ffdhe3072", 00:14:11.434 "ffdhe4096", 00:14:11.434 "ffdhe6144", 00:14:11.434 "ffdhe8192" 00:14:11.434 ] 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "bdev_nvme_attach_controller", 00:14:11.434 "params": { 00:14:11.434 "name": "nvme0", 00:14:11.434 "trtype": "TCP", 00:14:11.434 "adrfam": "IPv4", 00:14:11.434 "traddr": "10.0.0.2", 00:14:11.434 "trsvcid": "4420", 00:14:11.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:11.434 "prchk_reftag": false, 00:14:11.434 "prchk_guard": false, 00:14:11.434 "ctrlr_loss_timeout_sec": 0, 00:14:11.434 "reconnect_delay_sec": 0, 00:14:11.434 "fast_io_fail_timeout_sec": 0, 00:14:11.434 "psk": "key0", 00:14:11.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.434 "hdgst": false, 00:14:11.434 "ddgst": false 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "bdev_nvme_set_hotplug", 00:14:11.434 "params": { 00:14:11.434 "period_us": 100000, 00:14:11.434 "enable": false 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "bdev_enable_histogram", 00:14:11.434 "params": { 00:14:11.434 "name": "nvme0n1", 00:14:11.434 "enable": true 00:14:11.434 } 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "method": "bdev_wait_for_examine" 00:14:11.434 } 00:14:11.434 ] 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "subsystem": "nbd", 00:14:11.434 "config": [] 00:14:11.434 } 00:14:11.434 ] 00:14:11.434 }' 00:14:11.434 12:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74058 00:14:11.434 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74058 ']' 00:14:11.434 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74058 00:14:11.434 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:11.434 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.434 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74058 00:14:11.434 killing process with pid 74058 00:14:11.434 Received shutdown signal, test time was about 1.000000 seconds 00:14:11.434 00:14:11.434 Latency(us) 00:14:11.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.435 =================================================================================================================== 00:14:11.435 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:11.435 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:11.435 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:11.435 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74058' 00:14:11.435 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74058 00:14:11.435 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74058 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74026 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74026 ']' 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74026 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74026 00:14:11.693 killing process with pid 74026 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74026' 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74026 00:14:11.693 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74026 
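With that run torn down, target/tls.sh restarts nvmf_tgt and feeds it the configuration it just saved, passed as JSON on a file descriptor (the -c /dev/fd/62 seen below). A minimal sketch of the same idea, keeping only the TLS-relevant entries from the saved config: the malloc bdev and namespace entries are omitted, /tmp/tls_tgt.json is an arbitrary file name chosen for illustration, and the omitted subsystems are assumed to fall back to their defaults.

# Write a trimmed config: the PSK key plus a subsystem whose host entry and
# listener require TLS (secure_channel), mirroring the JSON captured above.
cat > /tmp/tls_tgt.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.m8Wdz1ev8E" } }
    ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": true } }
    ] }
  ]
}
EOF

# Start the target inside the test's network namespace with that config.
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tls_tgt.json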
00:14:11.952 12:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:11.952 12:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:11.952 "subsystems": [ 00:14:11.952 { 00:14:11.952 "subsystem": "keyring", 00:14:11.952 "config": [ 00:14:11.952 { 00:14:11.952 "method": "keyring_file_add_key", 00:14:11.952 "params": { 00:14:11.952 "name": "key0", 00:14:11.952 "path": "/tmp/tmp.m8Wdz1ev8E" 00:14:11.952 } 00:14:11.952 } 00:14:11.952 ] 00:14:11.952 }, 00:14:11.952 { 00:14:11.952 "subsystem": "iobuf", 00:14:11.952 "config": [ 00:14:11.952 { 00:14:11.952 "method": "iobuf_set_options", 00:14:11.952 "params": { 00:14:11.952 "small_pool_count": 8192, 00:14:11.952 "large_pool_count": 1024, 00:14:11.952 "small_bufsize": 8192, 00:14:11.952 "large_bufsize": 135168 00:14:11.952 } 00:14:11.952 } 00:14:11.952 ] 00:14:11.952 }, 00:14:11.952 { 00:14:11.952 "subsystem": "sock", 00:14:11.952 "config": [ 00:14:11.952 { 00:14:11.952 "method": "sock_set_default_impl", 00:14:11.952 "params": { 00:14:11.952 "impl_name": "uring" 00:14:11.952 } 00:14:11.952 }, 00:14:11.952 { 00:14:11.952 "method": "sock_impl_set_options", 00:14:11.952 "params": { 00:14:11.952 "impl_name": "ssl", 00:14:11.952 "recv_buf_size": 4096, 00:14:11.952 "send_buf_size": 4096, 00:14:11.952 "enable_recv_pipe": true, 00:14:11.952 "enable_quickack": false, 00:14:11.952 "enable_placement_id": 0, 00:14:11.952 "enable_zerocopy_send_server": true, 00:14:11.952 "enable_zerocopy_send_client": false, 00:14:11.952 "zerocopy_threshold": 0, 00:14:11.952 "tls_version": 0, 00:14:11.952 "enable_ktls": false 00:14:11.952 } 00:14:11.952 }, 00:14:11.952 { 00:14:11.952 "method": "sock_impl_set_options", 00:14:11.952 "params": { 00:14:11.952 "impl_name": "posix", 00:14:11.952 "recv_buf_size": 2097152, 00:14:11.952 "send_buf_size": 2097152, 00:14:11.952 "enable_recv_pipe": true, 00:14:11.952 "enable_quickack": false, 00:14:11.952 "enable_placement_id": 0, 00:14:11.952 "enable_zerocopy_send_server": true, 00:14:11.952 "enable_zerocopy_send_client": false, 00:14:11.952 "zerocopy_threshold": 0, 00:14:11.952 "tls_version": 0, 00:14:11.952 "enable_ktls": false 00:14:11.952 } 00:14:11.952 }, 00:14:11.952 { 00:14:11.952 "method": "sock_impl_set_options", 00:14:11.952 "params": { 00:14:11.952 "impl_name": "uring", 00:14:11.952 "recv_buf_size": 2097152, 00:14:11.952 "send_buf_size": 2097152, 00:14:11.952 "enable_recv_pipe": true, 00:14:11.953 "enable_quickack": false, 00:14:11.953 "enable_placement_id": 0, 00:14:11.953 "enable_zerocopy_send_server": false, 00:14:11.953 "enable_zerocopy_send_client": false, 00:14:11.953 "zerocopy_threshold": 0, 00:14:11.953 "tls_version": 0, 00:14:11.953 "enable_ktls": false 00:14:11.953 } 00:14:11.953 } 00:14:11.953 ] 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "subsystem": "vmd", 00:14:11.953 "config": [] 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "subsystem": "accel", 00:14:11.953 "config": [ 00:14:11.953 { 00:14:11.953 "method": "accel_set_options", 00:14:11.953 "params": { 00:14:11.953 "small_cache_size": 128, 00:14:11.953 "large_cache_size": 16, 00:14:11.953 "task_count": 2048, 00:14:11.953 "sequence_count": 2048, 00:14:11.953 "buf_count": 2048 00:14:11.953 } 00:14:11.953 } 00:14:11.953 ] 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "subsystem": "bdev", 00:14:11.953 "config": [ 00:14:11.953 { 00:14:11.953 "method": "bdev_set_options", 00:14:11.953 "params": { 00:14:11.953 "bdev_io_pool_size": 65535, 00:14:11.953 "bdev_io_cache_size": 256, 00:14:11.953 "bdev_auto_examine": true, 00:14:11.953 
"iobuf_small_cache_size": 128, 00:14:11.953 "iobuf_large_cache_size": 16 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "bdev_raid_set_options", 00:14:11.953 "params": { 00:14:11.953 "process_window_size_kb": 1024 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "bdev_iscsi_set_options", 00:14:11.953 "params": { 00:14:11.953 "timeout_sec": 30 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "bdev_nvme_set_options", 00:14:11.953 "params": { 00:14:11.953 "action_on_timeout": "none", 00:14:11.953 "timeout_us": 0, 00:14:11.953 "timeout_admin_us": 0, 00:14:11.953 "keep_alive_timeout_ms": 10000, 00:14:11.953 "arbitration_burst": 0, 00:14:11.953 "low_priority_weight": 0, 00:14:11.953 "medium_priority_weight": 0, 00:14:11.953 "high_priority_weight": 0, 00:14:11.953 "nvme_adminq_poll_period_us": 10000, 00:14:11.953 "nvme_ioq_poll_period_us": 0, 00:14:11.953 "io_queue_requests": 0, 00:14:11.953 "delay_cmd_submit": true, 00:14:11.953 "transport_retry_count": 4, 00:14:11.953 "bdev_retry_count": 3, 00:14:11.953 "transport_ack_timeout": 0, 00:14:11.953 "ctrlr_loss_timeout_sec": 0, 00:14:11.953 "reconnect_delay_sec": 0, 00:14:11.953 "fast_io_fail_timeout_sec": 0, 00:14:11.953 "disable_auto_failback": false, 00:14:11.953 "generate_uuids": false, 00:14:11.953 "transport_tos": 0, 00:14:11.953 "nvme_error_stat": false, 00:14:11.953 "rdma_srq_size": 0, 00:14:11.953 "io_path_stat": false, 00:14:11.953 "allow_accel_sequence": false, 00:14:11.953 "rdma_max_cq_size": 0, 00:14:11.953 "rdma_cm_event_timeout_ms": 0, 00:14:11.953 "dhchap_digests": [ 00:14:11.953 "sha256", 00:14:11.953 "sha384", 00:14:11.953 "sha512" 00:14:11.953 ], 00:14:11.953 "dhchap_dhgroups": [ 00:14:11.953 "null", 00:14:11.953 "ffdhe2048", 00:14:11.953 "ffdhe3072", 00:14:11.953 "ffdhe4096", 00:14:11.953 "ffdhe6144", 00:14:11.953 "ffdhe8192" 00:14:11.953 ] 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "bdev_nvme_set_hotplug", 00:14:11.953 "params": { 00:14:11.953 "period_us": 100000, 00:14:11.953 "enable": false 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "bdev_malloc_create", 00:14:11.953 "params": { 00:14:11.953 "name": "malloc0", 00:14:11.953 "num_blocks": 8192, 00:14:11.953 "block_size": 4096, 00:14:11.953 "physical_block_size": 4096, 00:14:11.953 "uuid": "32a79dcf-008d-4ad1-9c69-a76f167403b4", 00:14:11.953 "optimal_io_boundary": 0 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "bdev_wait_for_examine" 00:14:11.953 } 00:14:11.953 ] 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "subsystem": "nbd", 00:14:11.953 "config": [] 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "subsystem": "scheduler", 00:14:11.953 "config": [ 00:14:11.953 { 00:14:11.953 "method": "framework_set_scheduler", 00:14:11.953 "params": { 00:14:11.953 "name": "static" 00:14:11.953 } 00:14:11.953 } 00:14:11.953 ] 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "subsystem": "nvmf", 00:14:11.953 "config": [ 00:14:11.953 { 00:14:11.953 "method": "nvmf_set_config", 00:14:11.953 "params": { 00:14:11.953 "discovery_filter": "match_any", 00:14:11.953 "admin_cmd_passthru": { 00:14:11.953 "identify_ctrlr": false 00:14:11.953 } 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "nvmf_set_max_subsystems", 00:14:11.953 "params": { 00:14:11.953 "max_subsystems": 1024 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "nvmf_set_crdt", 00:14:11.953 "params": { 00:14:11.953 "crdt1": 0, 00:14:11.953 "crdt2": 0, 00:14:11.953 "crdt3": 0 
00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "nvmf_create_transport", 00:14:11.953 "params": { 00:14:11.953 "trtype": "TCP", 00:14:11.953 "max_queue_depth": 128, 00:14:11.953 "max_io_qpairs_per_ctrlr": 127, 00:14:11.953 "in_capsule_data_size": 4096, 00:14:11.953 "max_io_size": 131072, 00:14:11.953 "io_unit_size": 131072, 00:14:11.953 "max_aq_depth": 128, 00:14:11.953 "num_shared_buffers": 511, 00:14:11.953 "buf_cache_size": 4294967295, 00:14:11.953 "dif_insert_or_strip": false, 00:14:11.953 "zcopy": false, 00:14:11.953 "c2h_success": false, 00:14:11.953 "sock_priority": 0, 00:14:11.953 "abort_timeout_sec": 1, 00:14:11.953 "ack_timeout": 0, 00:14:11.953 "data_wr_pool_size": 0 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "nvmf_create_subsystem", 00:14:11.953 "params": { 00:14:11.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.953 "allow_any_host": false, 00:14:11.953 "serial_number": "00000000000000000000", 00:14:11.953 "model_number": "SPDK bdev Controller", 00:14:11.953 "max_namespaces": 32, 00:14:11.953 "min_cntlid": 1, 00:14:11.953 "max_cntlid": 65519, 00:14:11.953 "ana_reporting": false 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "nvmf_subsystem_add_host", 00:14:11.953 "params": { 00:14:11.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.953 "host": "nqn.2016-06.io.spdk:host1", 00:14:11.953 "psk": "key0" 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "nvmf_subsystem_add_ns", 00:14:11.953 "params": { 00:14:11.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.953 "namespace": { 00:14:11.953 "nsid": 1, 00:14:11.953 "bdev_name": "malloc0", 00:14:11.953 "nguid": "32A79DCF008D4AD19C69A76F167403B4", 00:14:11.953 "uuid": "32a79dcf-008d-4ad1-9c69-a76f167403b4", 00:14:11.953 "no_auto_visible": false 00:14:11.953 } 00:14:11.953 } 00:14:11.953 }, 00:14:11.953 { 00:14:11.953 "method": "nvmf_subsystem_add_listener", 00:14:11.953 "params": { 00:14:11.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.953 "listen_address": { 00:14:11.953 "trtype": "TCP", 00:14:11.953 "adrfam": "IPv4", 00:14:11.953 "traddr": "10.0.0.2", 00:14:11.953 "trsvcid": "4420" 00:14:11.953 }, 00:14:11.953 "secure_channel": true 00:14:11.953 } 00:14:11.953 } 00:14:11.953 ] 00:14:11.953 } 00:14:11.953 ] 00:14:11.953 }' 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74119 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74119 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74119 ']' 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.953 12:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.953 [2024-07-15 12:56:27.913384] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:11.953 [2024-07-15 12:56:27.913655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.212 [2024-07-15 12:56:28.053709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.212 [2024-07-15 12:56:28.157439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.212 [2024-07-15 12:56:28.157495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.212 [2024-07-15 12:56:28.157507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.212 [2024-07-15 12:56:28.157516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.212 [2024-07-15 12:56:28.157524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.212 [2024-07-15 12:56:28.157614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.470 [2024-07-15 12:56:28.325171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.470 [2024-07-15 12:56:28.400850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.470 [2024-07-15 12:56:28.432772] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:12.470 [2024-07-15 12:56:28.432972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.040 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74151 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74151 /var/tmp/bdevperf.sock 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74151 ']' 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
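On the initiator side of this final pass, bdevperf is started idle (-z) with its own RPC socket, its NVMe/TLS attach arrives as JSON on -c, and the workload is then triggered over that socket, as the trace below shows. A reduced sketch of the same flow, assuming the bdev configuration (including the --psk attach) has already been written to a hypothetical /tmp/tls_bperf.json:

# Start bdevperf idle; it waits on its RPC socket until told to run I/O.
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c /tmp/tls_bperf.json &

# Once the socket is up, confirm the TLS-attached controller is present.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'

# Run the configured 4k verify workload and print the latency summary.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests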
00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.041 12:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:13.041 "subsystems": [ 00:14:13.041 { 00:14:13.041 "subsystem": "keyring", 00:14:13.041 "config": [ 00:14:13.041 { 00:14:13.041 "method": "keyring_file_add_key", 00:14:13.041 "params": { 00:14:13.041 "name": "key0", 00:14:13.041 "path": "/tmp/tmp.m8Wdz1ev8E" 00:14:13.041 } 00:14:13.041 } 00:14:13.041 ] 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "subsystem": "iobuf", 00:14:13.041 "config": [ 00:14:13.041 { 00:14:13.041 "method": "iobuf_set_options", 00:14:13.041 "params": { 00:14:13.041 "small_pool_count": 8192, 00:14:13.041 "large_pool_count": 1024, 00:14:13.041 "small_bufsize": 8192, 00:14:13.041 "large_bufsize": 135168 00:14:13.041 } 00:14:13.041 } 00:14:13.041 ] 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "subsystem": "sock", 00:14:13.041 "config": [ 00:14:13.041 { 00:14:13.041 "method": "sock_set_default_impl", 00:14:13.041 "params": { 00:14:13.041 "impl_name": "uring" 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "sock_impl_set_options", 00:14:13.041 "params": { 00:14:13.041 "impl_name": "ssl", 00:14:13.041 "recv_buf_size": 4096, 00:14:13.041 "send_buf_size": 4096, 00:14:13.041 "enable_recv_pipe": true, 00:14:13.041 "enable_quickack": false, 00:14:13.041 "enable_placement_id": 0, 00:14:13.041 "enable_zerocopy_send_server": true, 00:14:13.041 "enable_zerocopy_send_client": false, 00:14:13.041 "zerocopy_threshold": 0, 00:14:13.041 "tls_version": 0, 00:14:13.041 "enable_ktls": false 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "sock_impl_set_options", 00:14:13.041 "params": { 00:14:13.041 "impl_name": "posix", 00:14:13.041 "recv_buf_size": 2097152, 00:14:13.041 "send_buf_size": 2097152, 00:14:13.041 "enable_recv_pipe": true, 00:14:13.041 "enable_quickack": false, 00:14:13.041 "enable_placement_id": 0, 00:14:13.041 "enable_zerocopy_send_server": true, 00:14:13.041 "enable_zerocopy_send_client": false, 00:14:13.041 "zerocopy_threshold": 0, 00:14:13.041 "tls_version": 0, 00:14:13.041 "enable_ktls": false 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "sock_impl_set_options", 00:14:13.041 "params": { 00:14:13.041 "impl_name": "uring", 00:14:13.041 "recv_buf_size": 2097152, 00:14:13.041 "send_buf_size": 2097152, 00:14:13.041 "enable_recv_pipe": true, 00:14:13.041 "enable_quickack": false, 00:14:13.041 "enable_placement_id": 0, 00:14:13.041 "enable_zerocopy_send_server": false, 00:14:13.041 "enable_zerocopy_send_client": false, 00:14:13.041 "zerocopy_threshold": 0, 00:14:13.041 "tls_version": 0, 00:14:13.041 "enable_ktls": false 00:14:13.041 } 00:14:13.041 } 00:14:13.041 ] 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "subsystem": "vmd", 00:14:13.041 "config": [] 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "subsystem": "accel", 00:14:13.041 "config": [ 00:14:13.041 { 00:14:13.041 "method": "accel_set_options", 00:14:13.041 "params": { 00:14:13.041 "small_cache_size": 128, 00:14:13.041 "large_cache_size": 16, 00:14:13.041 "task_count": 2048, 00:14:13.041 "sequence_count": 2048, 00:14:13.041 "buf_count": 2048 00:14:13.041 } 00:14:13.041 } 00:14:13.041 ] 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "subsystem": "bdev", 00:14:13.041 "config": [ 00:14:13.041 { 00:14:13.041 "method": "bdev_set_options", 00:14:13.041 "params": { 00:14:13.041 "bdev_io_pool_size": 65535, 00:14:13.041 
"bdev_io_cache_size": 256, 00:14:13.041 "bdev_auto_examine": true, 00:14:13.041 "iobuf_small_cache_size": 128, 00:14:13.041 "iobuf_large_cache_size": 16 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "bdev_raid_set_options", 00:14:13.041 "params": { 00:14:13.041 "process_window_size_kb": 1024 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "bdev_iscsi_set_options", 00:14:13.041 "params": { 00:14:13.041 "timeout_sec": 30 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "bdev_nvme_set_options", 00:14:13.041 "params": { 00:14:13.041 "action_on_timeout": "none", 00:14:13.041 "timeout_us": 0, 00:14:13.041 "timeout_admin_us": 0, 00:14:13.041 "keep_alive_timeout_ms": 10000, 00:14:13.041 "arbitration_burst": 0, 00:14:13.041 "low_priority_weight": 0, 00:14:13.041 "medium_priority_weight": 0, 00:14:13.041 "high_priority_weight": 0, 00:14:13.041 "nvme_adminq_poll_period_us": 10000, 00:14:13.041 "nvme_ioq_poll_period_us": 0, 00:14:13.041 "io_queue_requests": 512, 00:14:13.041 "delay_cmd_submit": true, 00:14:13.041 "transport_retry_count": 4, 00:14:13.041 "bdev_retry_count": 3, 00:14:13.041 "transport_ack_timeout": 0, 00:14:13.041 "ctrlr_loss_timeout_sec": 0, 00:14:13.041 "reconnect_delay_sec": 0, 00:14:13.041 "fast_io_fail_timeout_sec": 0, 00:14:13.041 "disable_auto_failback": false, 00:14:13.041 "generate_uuids": false, 00:14:13.041 "transport_tos": 0, 00:14:13.041 "nvme_error_stat": false, 00:14:13.041 "rdma_srq_size": 0, 00:14:13.041 "io_path_stat": false, 00:14:13.041 "allow_accel_sequence": false, 00:14:13.041 "rdma_max_cq_size": 0, 00:14:13.041 "rdma_cm_event_timeout_ms": 0, 00:14:13.041 "dhchap_digests": [ 00:14:13.041 "sha256", 00:14:13.041 "sha384", 00:14:13.041 "sha512" 00:14:13.041 ], 00:14:13.041 "dhchap_dhgroups": [ 00:14:13.041 "null", 00:14:13.041 "ffdhe2048", 00:14:13.041 "ffdhe3072", 00:14:13.041 "ffdhe4096", 00:14:13.041 "ffdhe6144", 00:14:13.041 "ffdhe8192" 00:14:13.041 ] 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "bdev_nvme_attach_controller", 00:14:13.041 "params": { 00:14:13.041 "name": "nvme0", 00:14:13.041 "trtype": "TCP", 00:14:13.041 "adrfam": "IPv4", 00:14:13.041 "traddr": "10.0.0.2", 00:14:13.041 "trsvcid": "4420", 00:14:13.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.041 "prchk_reftag": false, 00:14:13.041 "prchk_guard": false, 00:14:13.041 "ctrlr_loss_timeout_sec": 0, 00:14:13.041 "reconnect_delay_sec": 0, 00:14:13.041 "fast_io_fail_timeout_sec": 0, 00:14:13.041 "psk": "key0", 00:14:13.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.041 "hdgst": false, 00:14:13.041 "ddgst": false 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "bdev_nvme_set_hotplug", 00:14:13.041 "params": { 00:14:13.041 "period_us": 100000, 00:14:13.041 "enable": false 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "bdev_enable_histogram", 00:14:13.041 "params": { 00:14:13.041 "name": "nvme0n1", 00:14:13.041 "enable": true 00:14:13.041 } 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "method": "bdev_wait_for_examine" 00:14:13.041 } 00:14:13.041 ] 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "subsystem": "nbd", 00:14:13.041 "config": [] 00:14:13.041 } 00:14:13.041 ] 00:14:13.041 }' 00:14:13.042 [2024-07-15 12:56:28.996792] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:13.042 [2024-07-15 12:56:28.997056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74151 ] 00:14:13.301 [2024-07-15 12:56:29.137239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.301 [2024-07-15 12:56:29.259483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.559 [2024-07-15 12:56:29.394648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.559 [2024-07-15 12:56:29.441784] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.126 12:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.126 12:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:14.126 12:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:14.126 12:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:14.126 12:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.126 12:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:14.383 Running I/O for 1 seconds... 00:14:15.318 00:14:15.318 Latency(us) 00:14:15.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.318 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:15.318 Verification LBA range: start 0x0 length 0x2000 00:14:15.318 nvme0n1 : 1.03 3861.94 15.09 0.00 0.00 32773.04 9175.04 21686.46 00:14:15.318 =================================================================================================================== 00:14:15.318 Total : 3861.94 15.09 0.00 0.00 32773.04 9175.04 21686.46 00:14:15.318 0 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:15.318 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:15.318 nvmf_trace.0 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74151 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74151 ']' 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 74151 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74151 00:14:15.577 killing process with pid 74151 00:14:15.577 Received shutdown signal, test time was about 1.000000 seconds 00:14:15.577 00:14:15.577 Latency(us) 00:14:15.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.577 =================================================================================================================== 00:14:15.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74151' 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74151 00:14:15.577 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74151 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.837 rmmod nvme_tcp 00:14:15.837 rmmod nvme_fabrics 00:14:15.837 rmmod nvme_keyring 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74119 ']' 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74119 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74119 ']' 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74119 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74119 00:14:15.837 killing process with pid 74119 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74119' 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74119 00:14:15.837 12:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74119 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:16.096 12:56:32 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.q0GHGL75iV /tmp/tmp.pr8aFiVbXu /tmp/tmp.m8Wdz1ev8E 00:14:16.096 ************************************ 00:14:16.096 END TEST nvmf_tls 00:14:16.096 ************************************ 00:14:16.096 00:14:16.096 real 1m26.881s 00:14:16.096 user 2m18.868s 00:14:16.096 sys 0m27.449s 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:16.096 12:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.096 12:56:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:16.096 12:56:32 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:16.096 12:56:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:16.096 12:56:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.096 12:56:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:16.096 ************************************ 00:14:16.096 START TEST nvmf_fips 00:14:16.096 ************************************ 00:14:16.096 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:16.356 * Looking for test storage... 
00:14:16.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:16.356 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:16.357 Error setting digest 00:14:16.357 00D2233C2B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:16.357 00D2233C2B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.357 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:16.617 Cannot find device "nvmf_tgt_br" 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.617 Cannot find device "nvmf_tgt_br2" 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:16.617 Cannot find device "nvmf_tgt_br" 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:16.617 Cannot find device "nvmf_tgt_br2" 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:16.617 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:16.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:16.877 00:14:16.877 --- 10.0.0.2 ping statistics --- 00:14:16.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.877 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:16.877 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:16.877 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:16.877 00:14:16.877 --- 10.0.0.3 ping statistics --- 00:14:16.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.877 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:16.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:14:16.877 00:14:16.877 --- 10.0.0.1 ping statistics --- 00:14:16.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.877 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:16.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74421 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74421 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74421 ']' 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.877 12:56:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:16.877 [2024-07-15 12:56:32.865814] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
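Before the target came up, nvmf_veth_init laid out the network the rest of the run depends on: one veth pair for the initiator that stays in the root namespace, two veth pairs whose far ends move into the nvmf_tgt_ns_spdk namespace, and a bridge joining the near ends, with an iptables accept rule for port 4420. Condensed from the ip commands traced above (link-up steps and the second target interface omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root ns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                       # bridge stitches the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# the pings above (10.0.0.2 and 10.0.0.3 from the root ns, 10.0.0.1 from inside the ns) verify this wiring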
00:14:16.877 [2024-07-15 12:56:32.866176] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.136 [2024-07-15 12:56:32.999338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.137 [2024-07-15 12:56:33.098505] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.137 [2024-07-15 12:56:33.098730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.137 [2024-07-15 12:56:33.098904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.137 [2024-07-15 12:56:33.098961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.137 [2024-07-15 12:56:33.098992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.137 [2024-07-15 12:56:33.099106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.137 [2024-07-15 12:56:33.150906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:18.071 12:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:18.071 [2024-07-15 12:56:34.090815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.071 [2024-07-15 12:56:34.106719] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:18.071 [2024-07-15 12:56:34.106891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.330 [2024-07-15 12:56:34.137464] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:18.330 malloc0 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
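setup_nvmf_tgt_conf hands the PSK file to a single batched rpc.py call (fips.sh@24 above), so the individual RPCs are not expanded in this trace. Judging by the notices that follow, namely the TCP transport init, the TLS listener on 10.0.0.2:4420, the nvmf_tcp_psk_path deprecation warning, and the malloc0 bdev, the target-side provisioning is roughly the sketch below; the malloc geometry, serial number, and the --psk placement on nvmf_subsystem_add_host are assumptions, not copied from fips.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt       # NVMeTLSkey-1:01:..., chmod 0600 above
$rpc nvmf_create_transport -t tcp
$rpc bdev_malloc_create 32 4096 -b malloc0                    # size/block size are illustrative
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # serial is illustrative
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"   # assumed carrier of the deprecated PSK path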
00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74455 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74455 /var/tmp/bdevperf.sock 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74455 ']' 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.330 12:56:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.330 [2024-07-15 12:56:34.248757] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:18.330 [2024-07-15 12:56:34.249137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74455 ] 00:14:18.330 [2024-07-15 12:56:34.388203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.589 [2024-07-15 12:56:34.504188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.589 [2024-07-15 12:56:34.559332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:19.155 12:56:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.155 12:56:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:19.155 12:56:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:19.414 [2024-07-15 12:56:35.429031] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.414 [2024-07-15 12:56:35.429141] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:19.672 TLSTESTn1 00:14:19.672 12:56:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:19.672 Running I/O for 10 seconds... 
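The initiator side is fully visible in the trace just above and condenses to three commands: bdevperf started in RPC-server mode, an NVMe/TCP bdev attached to the target over TLS with the same PSK file, and perform_tests kicked off over the bdevperf socket. The latency summary that follows is the output of that 10 second verify run:

# condensed from the commands traced above (the harness also waits for the RPC socket before attaching)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests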
00:14:29.677 00:14:29.677 Latency(us) 00:14:29.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.677 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:29.677 Verification LBA range: start 0x0 length 0x2000 00:14:29.677 TLSTESTn1 : 10.02 4013.61 15.68 0.00 0.00 31822.05 2651.23 26810.18 00:14:29.677 =================================================================================================================== 00:14:29.677 Total : 4013.61 15.68 0.00 0.00 31822.05 2651.23 26810.18 00:14:29.677 0 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:29.677 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:29.677 nvmf_trace.0 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74455 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74455 ']' 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74455 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74455 00:14:29.935 killing process with pid 74455 00:14:29.935 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.935 00:14:29.935 Latency(us) 00:14:29.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.935 =================================================================================================================== 00:14:29.935 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74455' 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74455 00:14:29.935 [2024-07-15 12:56:45.785230] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:29.935 12:56:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74455 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.193 rmmod nvme_tcp 00:14:30.193 rmmod nvme_fabrics 00:14:30.193 rmmod nvme_keyring 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74421 ']' 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74421 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74421 ']' 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74421 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74421 00:14:30.193 killing process with pid 74421 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74421' 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74421 00:14:30.193 [2024-07-15 12:56:46.117670] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:30.193 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74421 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:30.451 ************************************ 00:14:30.451 END TEST nvmf_fips 00:14:30.451 ************************************ 00:14:30.451 00:14:30.451 real 0m14.234s 00:14:30.451 user 0m19.322s 00:14:30.451 sys 0m5.784s 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.451 12:56:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:30.451 12:56:46 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:30.451 12:56:46 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:30.451 12:56:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:30.451 12:56:46 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:30.451 12:56:46 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.451 12:56:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.451 12:56:46 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:30.451 12:56:46 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.451 12:56:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.451 12:56:46 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:30.451 12:56:46 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:30.451 12:56:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.451 12:56:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.451 12:56:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.451 ************************************ 00:14:30.451 START TEST nvmf_identify 00:14:30.451 ************************************ 00:14:30.451 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:30.710 * Looking for test storage... 00:14:30.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.710 12:56:46 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.710 12:56:46 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:30.710 Cannot find device "nvmf_tgt_br" 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.710 Cannot find device "nvmf_tgt_br2" 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:30.710 12:56:46 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:30.710 Cannot find device "nvmf_tgt_br" 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:30.710 Cannot find device "nvmf_tgt_br2" 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:30.710 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:30.711 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:30.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:30.969 00:14:30.969 --- 10.0.0.2 ping statistics --- 00:14:30.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.969 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:30.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:30.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:30.969 00:14:30.969 --- 10.0.0.3 ping statistics --- 00:14:30.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.969 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:30.969 00:14:30.969 --- 10.0.0.1 ping statistics --- 00:14:30.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.969 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74805 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74805 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74805 ']' 00:14:30.969 12:56:46 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.969 12:56:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.969 [2024-07-15 12:56:46.939823] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:30.969 [2024-07-15 12:56:46.939892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.228 [2024-07-15 12:56:47.074774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.228 [2024-07-15 12:56:47.179974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.228 [2024-07-15 12:56:47.180166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.228 [2024-07-15 12:56:47.180326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.228 [2024-07-15 12:56:47.180585] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.228 [2024-07-15 12:56:47.180741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:31.228 [2024-07-15 12:56:47.180960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.228 [2024-07-15 12:56:47.181043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.228 [2024-07-15 12:56:47.183395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.228 [2024-07-15 12:56:47.183404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.228 [2024-07-15 12:56:47.235383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 [2024-07-15 12:56:47.969050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.162 12:56:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 Malloc0 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 [2024-07-15 12:56:48.070518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 [ 00:14:32.162 { 00:14:32.162 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:32.162 "subtype": "Discovery", 00:14:32.162 "listen_addresses": [ 00:14:32.162 { 00:14:32.162 "trtype": "TCP", 00:14:32.162 "adrfam": "IPv4", 00:14:32.162 "traddr": "10.0.0.2", 00:14:32.162 "trsvcid": "4420" 00:14:32.162 } 00:14:32.162 ], 00:14:32.162 "allow_any_host": true, 00:14:32.162 "hosts": [] 00:14:32.162 }, 00:14:32.162 { 00:14:32.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.162 "subtype": "NVMe", 00:14:32.162 "listen_addresses": [ 00:14:32.162 { 00:14:32.162 "trtype": "TCP", 00:14:32.162 "adrfam": "IPv4", 00:14:32.162 "traddr": "10.0.0.2", 00:14:32.162 "trsvcid": "4420" 00:14:32.162 } 00:14:32.162 ], 00:14:32.162 "allow_any_host": true, 00:14:32.162 "hosts": [], 00:14:32.162 "serial_number": "SPDK00000000000001", 00:14:32.162 "model_number": "SPDK bdev Controller", 00:14:32.162 "max_namespaces": 32, 00:14:32.162 "min_cntlid": 1, 00:14:32.162 "max_cntlid": 65519, 00:14:32.162 "namespaces": [ 00:14:32.162 { 00:14:32.162 "nsid": 1, 00:14:32.162 "bdev_name": "Malloc0", 00:14:32.162 "name": "Malloc0", 00:14:32.162 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:32.162 "eui64": "ABCDEF0123456789", 00:14:32.162 "uuid": "332f648a-b5a4-4192-8529-38a0f659a7f4" 00:14:32.162 } 00:14:32.162 ] 00:14:32.162 } 00:14:32.162 ] 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.162 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:32.162 [2024-07-15 12:56:48.125059] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
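The rpc_cmd invocations in the trace above forward their arguments to SPDK's scripts/rpc.py. Outside the test harness, the same target configuration that produced the nvmf_get_subsystems output shown here could be driven directly, for example (paths relative to an SPDK checkout; flags copied from the trace):

    # Recap of the configuration steps performed by host/identify.sh above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems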
00:14:32.162 [2024-07-15 12:56:48.125259] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74840 ] 00:14:32.423 [2024-07-15 12:56:48.269980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:32.423 [2024-07-15 12:56:48.270059] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:32.423 [2024-07-15 12:56:48.270067] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:32.423 [2024-07-15 12:56:48.270080] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:32.423 [2024-07-15 12:56:48.270088] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:32.423 [2024-07-15 12:56:48.270223] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:32.423 [2024-07-15 12:56:48.270276] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c7c2c0 0 00:14:32.423 [2024-07-15 12:56:48.282379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:32.423 [2024-07-15 12:56:48.282408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:32.423 [2024-07-15 12:56:48.282414] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:32.423 [2024-07-15 12:56:48.282418] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:32.423 [2024-07-15 12:56:48.282467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.282475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.282480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.423 [2024-07-15 12:56:48.282494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:32.423 [2024-07-15 12:56:48.282535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.423 [2024-07-15 12:56:48.290384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.423 [2024-07-15 12:56:48.290415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.423 [2024-07-15 12:56:48.290424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.290432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.423 [2024-07-15 12:56:48.290454] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:32.423 [2024-07-15 12:56:48.290467] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:32.423 [2024-07-15 12:56:48.290477] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:32.423 [2024-07-15 12:56:48.290504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.290514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.423 
[2024-07-15 12:56:48.290521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.423 [2024-07-15 12:56:48.290534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.423 [2024-07-15 12:56:48.290574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.423 [2024-07-15 12:56:48.290637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.423 [2024-07-15 12:56:48.290650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.423 [2024-07-15 12:56:48.290657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.290663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.423 [2024-07-15 12:56:48.290673] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:32.423 [2024-07-15 12:56:48.290685] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:32.423 [2024-07-15 12:56:48.290699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.290707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.290714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.423 [2024-07-15 12:56:48.290727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.423 [2024-07-15 12:56:48.290764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.423 [2024-07-15 12:56:48.290820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.423 [2024-07-15 12:56:48.290832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.423 [2024-07-15 12:56:48.290839] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.290845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.423 [2024-07-15 12:56:48.290855] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:32.423 [2024-07-15 12:56:48.290869] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:32.423 [2024-07-15 12:56:48.290883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.290890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.290897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.423 [2024-07-15 12:56:48.290920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.423 [2024-07-15 12:56:48.290953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.423 [2024-07-15 12:56:48.290998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.423 [2024-07-15 12:56:48.291010] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.423 [2024-07-15 12:56:48.291018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.423 [2024-07-15 12:56:48.291025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.423 [2024-07-15 12:56:48.291035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:32.423 [2024-07-15 12:56:48.291053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.291079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.424 [2024-07-15 12:56:48.291111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.424 [2024-07-15 12:56:48.291156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.424 [2024-07-15 12:56:48.291169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.424 [2024-07-15 12:56:48.291175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.424 [2024-07-15 12:56:48.291194] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:32.424 [2024-07-15 12:56:48.291203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:32.424 [2024-07-15 12:56:48.291217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:32.424 [2024-07-15 12:56:48.291326] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:32.424 [2024-07-15 12:56:48.291345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:32.424 [2024-07-15 12:56:48.291374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.291404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.424 [2024-07-15 12:56:48.291433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.424 [2024-07-15 12:56:48.291485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.424 [2024-07-15 12:56:48.291493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.424 [2024-07-15 12:56:48.291497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.424 
[2024-07-15 12:56:48.291501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.424 [2024-07-15 12:56:48.291507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:32.424 [2024-07-15 12:56:48.291519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.291535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.424 [2024-07-15 12:56:48.291553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.424 [2024-07-15 12:56:48.291600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.424 [2024-07-15 12:56:48.291607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.424 [2024-07-15 12:56:48.291611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.424 [2024-07-15 12:56:48.291620] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:32.424 [2024-07-15 12:56:48.291625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:32.424 [2024-07-15 12:56:48.291634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:32.424 [2024-07-15 12:56:48.291646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:32.424 [2024-07-15 12:56:48.291659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.291672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.424 [2024-07-15 12:56:48.291692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.424 [2024-07-15 12:56:48.291779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.424 [2024-07-15 12:56:48.291787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.424 [2024-07-15 12:56:48.291791] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291795] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7c2c0): datao=0, datal=4096, cccid=0 00:14:32.424 [2024-07-15 12:56:48.291800] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbd940) on tqpair(0x1c7c2c0): expected_datao=0, payload_size=4096 00:14:32.424 [2024-07-15 12:56:48.291805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 
[2024-07-15 12:56:48.291814] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291819] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.424 [2024-07-15 12:56:48.291834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.424 [2024-07-15 12:56:48.291838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.424 [2024-07-15 12:56:48.291852] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:32.424 [2024-07-15 12:56:48.291858] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:32.424 [2024-07-15 12:56:48.291863] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:32.424 [2024-07-15 12:56:48.291868] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:32.424 [2024-07-15 12:56:48.291873] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:32.424 [2024-07-15 12:56:48.291879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:32.424 [2024-07-15 12:56:48.291888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:32.424 [2024-07-15 12:56:48.291896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.291905] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.291913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:32.424 [2024-07-15 12:56:48.291933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.424 [2024-07-15 12:56:48.291992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.424 [2024-07-15 12:56:48.291999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.424 [2024-07-15 12:56:48.292004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.424 [2024-07-15 12:56:48.292016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.292032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.424 [2024-07-15 12:56:48.292038] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.292053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.424 [2024-07-15 12:56:48.292059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.292073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.424 [2024-07-15 12:56:48.292080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.292094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.424 [2024-07-15 12:56:48.292099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:32.424 [2024-07-15 12:56:48.292114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:32.424 [2024-07-15 12:56:48.292122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.292134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.424 [2024-07-15 12:56:48.292155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbd940, cid 0, qid 0 00:14:32.424 [2024-07-15 12:56:48.292162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbdac0, cid 1, qid 0 00:14:32.424 [2024-07-15 12:56:48.292167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbdc40, cid 2, qid 0 00:14:32.424 [2024-07-15 12:56:48.292172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.424 [2024-07-15 12:56:48.292177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbdf40, cid 4, qid 0 00:14:32.424 [2024-07-15 12:56:48.292263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.424 [2024-07-15 12:56:48.292270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.424 [2024-07-15 12:56:48.292274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbdf40) on tqpair=0x1c7c2c0 00:14:32.424 [2024-07-15 12:56:48.292284] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:32.424 [2024-07-15 12:56:48.292294] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:32.424 [2024-07-15 12:56:48.292307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7c2c0) 00:14:32.424 [2024-07-15 12:56:48.292320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.424 [2024-07-15 12:56:48.292339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbdf40, cid 4, qid 0 00:14:32.424 [2024-07-15 12:56:48.292418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.424 [2024-07-15 12:56:48.292429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.424 [2024-07-15 12:56:48.292433] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292448] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7c2c0): datao=0, datal=4096, cccid=4 00:14:32.424 [2024-07-15 12:56:48.292454] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbdf40) on tqpair(0x1c7c2c0): expected_datao=0, payload_size=4096 00:14:32.424 [2024-07-15 12:56:48.292459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292467] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.424 [2024-07-15 12:56:48.292471] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.425 [2024-07-15 12:56:48.292487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.425 [2024-07-15 12:56:48.292491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbdf40) on tqpair=0x1c7c2c0 00:14:32.425 [2024-07-15 12:56:48.292510] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:32.425 [2024-07-15 12:56:48.292543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7c2c0) 00:14:32.425 [2024-07-15 12:56:48.292558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.425 [2024-07-15 12:56:48.292566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c7c2c0) 00:14:32.425 [2024-07-15 12:56:48.292580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.425 [2024-07-15 12:56:48.292609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1cbdf40, cid 4, qid 0 00:14:32.425 [2024-07-15 12:56:48.292617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbe0c0, cid 5, qid 0 00:14:32.425 [2024-07-15 12:56:48.292724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.425 [2024-07-15 12:56:48.292731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.425 [2024-07-15 12:56:48.292735] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292739] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7c2c0): datao=0, datal=1024, cccid=4 00:14:32.425 [2024-07-15 12:56:48.292744] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbdf40) on tqpair(0x1c7c2c0): expected_datao=0, payload_size=1024 00:14:32.425 [2024-07-15 12:56:48.292749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292756] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292760] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.425 [2024-07-15 12:56:48.292772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.425 [2024-07-15 12:56:48.292776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbe0c0) on tqpair=0x1c7c2c0 00:14:32.425 [2024-07-15 12:56:48.292798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.425 [2024-07-15 12:56:48.292806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.425 [2024-07-15 12:56:48.292810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbdf40) on tqpair=0x1c7c2c0 00:14:32.425 [2024-07-15 12:56:48.292827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7c2c0) 00:14:32.425 [2024-07-15 12:56:48.292839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.425 [2024-07-15 12:56:48.292863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbdf40, cid 4, qid 0 00:14:32.425 [2024-07-15 12:56:48.292929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.425 [2024-07-15 12:56:48.292936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.425 [2024-07-15 12:56:48.292940] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292944] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7c2c0): datao=0, datal=3072, cccid=4 00:14:32.425 [2024-07-15 12:56:48.292949] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbdf40) on tqpair(0x1c7c2c0): expected_datao=0, payload_size=3072 00:14:32.425 [2024-07-15 12:56:48.292953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292961] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292965] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.425 [2024-07-15 12:56:48.292979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.425 [2024-07-15 12:56:48.292983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.292987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbdf40) on tqpair=0x1c7c2c0 00:14:32.425 [2024-07-15 12:56:48.292998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.293003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7c2c0) 00:14:32.425 [2024-07-15 12:56:48.293010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.425 [2024-07-15 12:56:48.293033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbdf40, cid 4, qid 0 00:14:32.425 [2024-07-15 12:56:48.293101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.425 [2024-07-15 12:56:48.293107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.425 [2024-07-15 12:56:48.293111] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.293115] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7c2c0): datao=0, datal=8, cccid=4 00:14:32.425 [2024-07-15 12:56:48.293120] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbdf40) on tqpair(0x1c7c2c0): expected_datao=0, payload_size=8 00:14:32.425 [2024-07-15 12:56:48.293125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.293132] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.293136] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.293150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.425 [2024-07-15 12:56:48.293158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.425 [2024-07-15 12:56:48.293161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.425 [2024-07-15 12:56:48.293166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbdf40) on tqpair=0x1c7c2c0 00:14:32.425 ===================================================== 00:14:32.425 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:32.425 ===================================================== 00:14:32.425 Controller Capabilities/Features 00:14:32.425 ================================ 00:14:32.425 Vendor ID: 0000 00:14:32.425 Subsystem Vendor ID: 0000 00:14:32.425 Serial Number: .................... 00:14:32.425 Model Number: ........................................ 
00:14:32.425 Firmware Version: 24.09 00:14:32.425 Recommended Arb Burst: 0 00:14:32.425 IEEE OUI Identifier: 00 00 00 00:14:32.425 Multi-path I/O 00:14:32.425 May have multiple subsystem ports: No 00:14:32.425 May have multiple controllers: No 00:14:32.425 Associated with SR-IOV VF: No 00:14:32.425 Max Data Transfer Size: 131072 00:14:32.425 Max Number of Namespaces: 0 00:14:32.425 Max Number of I/O Queues: 1024 00:14:32.425 NVMe Specification Version (VS): 1.3 00:14:32.425 NVMe Specification Version (Identify): 1.3 00:14:32.425 Maximum Queue Entries: 128 00:14:32.425 Contiguous Queues Required: Yes 00:14:32.425 Arbitration Mechanisms Supported 00:14:32.425 Weighted Round Robin: Not Supported 00:14:32.425 Vendor Specific: Not Supported 00:14:32.425 Reset Timeout: 15000 ms 00:14:32.425 Doorbell Stride: 4 bytes 00:14:32.425 NVM Subsystem Reset: Not Supported 00:14:32.425 Command Sets Supported 00:14:32.425 NVM Command Set: Supported 00:14:32.425 Boot Partition: Not Supported 00:14:32.425 Memory Page Size Minimum: 4096 bytes 00:14:32.425 Memory Page Size Maximum: 4096 bytes 00:14:32.425 Persistent Memory Region: Not Supported 00:14:32.425 Optional Asynchronous Events Supported 00:14:32.425 Namespace Attribute Notices: Not Supported 00:14:32.425 Firmware Activation Notices: Not Supported 00:14:32.425 ANA Change Notices: Not Supported 00:14:32.425 PLE Aggregate Log Change Notices: Not Supported 00:14:32.425 LBA Status Info Alert Notices: Not Supported 00:14:32.425 EGE Aggregate Log Change Notices: Not Supported 00:14:32.425 Normal NVM Subsystem Shutdown event: Not Supported 00:14:32.425 Zone Descriptor Change Notices: Not Supported 00:14:32.425 Discovery Log Change Notices: Supported 00:14:32.425 Controller Attributes 00:14:32.425 128-bit Host Identifier: Not Supported 00:14:32.425 Non-Operational Permissive Mode: Not Supported 00:14:32.425 NVM Sets: Not Supported 00:14:32.425 Read Recovery Levels: Not Supported 00:14:32.425 Endurance Groups: Not Supported 00:14:32.425 Predictable Latency Mode: Not Supported 00:14:32.425 Traffic Based Keep ALive: Not Supported 00:14:32.425 Namespace Granularity: Not Supported 00:14:32.425 SQ Associations: Not Supported 00:14:32.425 UUID List: Not Supported 00:14:32.425 Multi-Domain Subsystem: Not Supported 00:14:32.425 Fixed Capacity Management: Not Supported 00:14:32.425 Variable Capacity Management: Not Supported 00:14:32.425 Delete Endurance Group: Not Supported 00:14:32.425 Delete NVM Set: Not Supported 00:14:32.425 Extended LBA Formats Supported: Not Supported 00:14:32.425 Flexible Data Placement Supported: Not Supported 00:14:32.425 00:14:32.425 Controller Memory Buffer Support 00:14:32.425 ================================ 00:14:32.425 Supported: No 00:14:32.425 00:14:32.425 Persistent Memory Region Support 00:14:32.425 ================================ 00:14:32.425 Supported: No 00:14:32.425 00:14:32.425 Admin Command Set Attributes 00:14:32.425 ============================ 00:14:32.425 Security Send/Receive: Not Supported 00:14:32.425 Format NVM: Not Supported 00:14:32.425 Firmware Activate/Download: Not Supported 00:14:32.425 Namespace Management: Not Supported 00:14:32.425 Device Self-Test: Not Supported 00:14:32.425 Directives: Not Supported 00:14:32.425 NVMe-MI: Not Supported 00:14:32.425 Virtualization Management: Not Supported 00:14:32.425 Doorbell Buffer Config: Not Supported 00:14:32.425 Get LBA Status Capability: Not Supported 00:14:32.425 Command & Feature Lockdown Capability: Not Supported 00:14:32.425 Abort Command Limit: 1 00:14:32.425 Async 
Event Request Limit: 4 00:14:32.425 Number of Firmware Slots: N/A 00:14:32.425 Firmware Slot 1 Read-Only: N/A 00:14:32.425 Firmware Activation Without Reset: N/A 00:14:32.425 Multiple Update Detection Support: N/A 00:14:32.425 Firmware Update Granularity: No Information Provided 00:14:32.425 Per-Namespace SMART Log: No 00:14:32.425 Asymmetric Namespace Access Log Page: Not Supported 00:14:32.425 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:32.425 Command Effects Log Page: Not Supported 00:14:32.425 Get Log Page Extended Data: Supported 00:14:32.425 Telemetry Log Pages: Not Supported 00:14:32.425 Persistent Event Log Pages: Not Supported 00:14:32.425 Supported Log Pages Log Page: May Support 00:14:32.425 Commands Supported & Effects Log Page: Not Supported 00:14:32.425 Feature Identifiers & Effects Log Page:May Support 00:14:32.425 NVMe-MI Commands & Effects Log Page: May Support 00:14:32.425 Data Area 4 for Telemetry Log: Not Supported 00:14:32.425 Error Log Page Entries Supported: 128 00:14:32.425 Keep Alive: Not Supported 00:14:32.425 00:14:32.425 NVM Command Set Attributes 00:14:32.426 ========================== 00:14:32.426 Submission Queue Entry Size 00:14:32.426 Max: 1 00:14:32.426 Min: 1 00:14:32.426 Completion Queue Entry Size 00:14:32.426 Max: 1 00:14:32.426 Min: 1 00:14:32.426 Number of Namespaces: 0 00:14:32.426 Compare Command: Not Supported 00:14:32.426 Write Uncorrectable Command: Not Supported 00:14:32.426 Dataset Management Command: Not Supported 00:14:32.426 Write Zeroes Command: Not Supported 00:14:32.426 Set Features Save Field: Not Supported 00:14:32.426 Reservations: Not Supported 00:14:32.426 Timestamp: Not Supported 00:14:32.426 Copy: Not Supported 00:14:32.426 Volatile Write Cache: Not Present 00:14:32.426 Atomic Write Unit (Normal): 1 00:14:32.426 Atomic Write Unit (PFail): 1 00:14:32.426 Atomic Compare & Write Unit: 1 00:14:32.426 Fused Compare & Write: Supported 00:14:32.426 Scatter-Gather List 00:14:32.426 SGL Command Set: Supported 00:14:32.426 SGL Keyed: Supported 00:14:32.426 SGL Bit Bucket Descriptor: Not Supported 00:14:32.426 SGL Metadata Pointer: Not Supported 00:14:32.426 Oversized SGL: Not Supported 00:14:32.426 SGL Metadata Address: Not Supported 00:14:32.426 SGL Offset: Supported 00:14:32.426 Transport SGL Data Block: Not Supported 00:14:32.426 Replay Protected Memory Block: Not Supported 00:14:32.426 00:14:32.426 Firmware Slot Information 00:14:32.426 ========================= 00:14:32.426 Active slot: 0 00:14:32.426 00:14:32.426 00:14:32.426 Error Log 00:14:32.426 ========= 00:14:32.426 00:14:32.426 Active Namespaces 00:14:32.426 ================= 00:14:32.426 Discovery Log Page 00:14:32.426 ================== 00:14:32.426 Generation Counter: 2 00:14:32.426 Number of Records: 2 00:14:32.426 Record Format: 0 00:14:32.426 00:14:32.426 Discovery Log Entry 0 00:14:32.426 ---------------------- 00:14:32.426 Transport Type: 3 (TCP) 00:14:32.426 Address Family: 1 (IPv4) 00:14:32.426 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:32.426 Entry Flags: 00:14:32.426 Duplicate Returned Information: 1 00:14:32.426 Explicit Persistent Connection Support for Discovery: 1 00:14:32.426 Transport Requirements: 00:14:32.426 Secure Channel: Not Required 00:14:32.426 Port ID: 0 (0x0000) 00:14:32.426 Controller ID: 65535 (0xffff) 00:14:32.426 Admin Max SQ Size: 128 00:14:32.426 Transport Service Identifier: 4420 00:14:32.426 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:32.426 Transport Address: 10.0.0.2 00:14:32.426 
Discovery Log Entry 1 00:14:32.426 ---------------------- 00:14:32.426 Transport Type: 3 (TCP) 00:14:32.426 Address Family: 1 (IPv4) 00:14:32.426 Subsystem Type: 2 (NVM Subsystem) 00:14:32.426 Entry Flags: 00:14:32.426 Duplicate Returned Information: 0 00:14:32.426 Explicit Persistent Connection Support for Discovery: 0 00:14:32.426 Transport Requirements: 00:14:32.426 Secure Channel: Not Required 00:14:32.426 Port ID: 0 (0x0000) 00:14:32.426 Controller ID: 65535 (0xffff) 00:14:32.426 Admin Max SQ Size: 128 00:14:32.426 Transport Service Identifier: 4420 00:14:32.426 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:32.426 Transport Address: 10.0.0.2 [2024-07-15 12:56:48.293299] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:32.426 [2024-07-15 12:56:48.293318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbd940) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.293326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.426 [2024-07-15 12:56:48.293333] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbdac0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.293338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.426 [2024-07-15 12:56:48.293343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbdc40) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.293348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.426 [2024-07-15 12:56:48.293354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.293377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.426 [2024-07-15 12:56:48.293394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.426 [2024-07-15 12:56:48.293421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.426 [2024-07-15 12:56:48.293457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.426 [2024-07-15 12:56:48.293520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.426 [2024-07-15 12:56:48.293528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.426 [2024-07-15 12:56:48.293532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.293545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.426 [2024-07-15 
12:56:48.293562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.426 [2024-07-15 12:56:48.293585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.426 [2024-07-15 12:56:48.293650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.426 [2024-07-15 12:56:48.293657] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.426 [2024-07-15 12:56:48.293661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.293670] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:32.426 [2024-07-15 12:56:48.293675] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:32.426 [2024-07-15 12:56:48.293686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.426 [2024-07-15 12:56:48.293702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.426 [2024-07-15 12:56:48.293721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.426 [2024-07-15 12:56:48.293775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.426 [2024-07-15 12:56:48.293782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.426 [2024-07-15 12:56:48.293786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.293802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.426 [2024-07-15 12:56:48.293818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.426 [2024-07-15 12:56:48.293835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.426 [2024-07-15 12:56:48.293884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.426 [2024-07-15 12:56:48.293891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.426 [2024-07-15 12:56:48.293895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.293910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.293919] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.426 [2024-07-15 12:56:48.293926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.426 [2024-07-15 12:56:48.293943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.426 [2024-07-15 12:56:48.293988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.426 [2024-07-15 12:56:48.293995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.426 [2024-07-15 12:56:48.293999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.294003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.294014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.294018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.294022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.426 [2024-07-15 12:56:48.294030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.426 [2024-07-15 12:56:48.294047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.426 [2024-07-15 12:56:48.294092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.426 [2024-07-15 12:56:48.294099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.426 [2024-07-15 12:56:48.294103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.294107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.294117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.294122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.294126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.426 [2024-07-15 12:56:48.294133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.426 [2024-07-15 12:56:48.294150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.426 [2024-07-15 12:56:48.294196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.426 [2024-07-15 12:56:48.294203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.426 [2024-07-15 12:56:48.294207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.426 [2024-07-15 12:56:48.294211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.426 [2024-07-15 12:56:48.294222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.294227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.294230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.427 [2024-07-15 12:56:48.294238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.427 [2024-07-15 12:56:48.294255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.427 [2024-07-15 12:56:48.294300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.427 [2024-07-15 12:56:48.294307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.427 [2024-07-15 12:56:48.294311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.294315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.427 [2024-07-15 12:56:48.294325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.294330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.294334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.427 [2024-07-15 12:56:48.294342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.427 [2024-07-15 12:56:48.298378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.427 [2024-07-15 12:56:48.298411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.427 [2024-07-15 12:56:48.298420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.427 [2024-07-15 12:56:48.298424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.298429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.427 [2024-07-15 12:56:48.298444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.298450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.298454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7c2c0) 00:14:32.427 [2024-07-15 12:56:48.298463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.427 [2024-07-15 12:56:48.298509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbddc0, cid 3, qid 0 00:14:32.427 [2024-07-15 12:56:48.298566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.427 [2024-07-15 12:56:48.298574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.427 [2024-07-15 12:56:48.298578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.427 [2024-07-15 12:56:48.298582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbddc0) on tqpair=0x1c7c2c0 00:14:32.427 [2024-07-15 12:56:48.298591] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:32.427 00:14:32.427 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:32.427 [2024-07-15 12:56:48.340035] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
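The discovery log printed above advertises two entries at 10.0.0.2:4420 (the discovery subsystem and nqn.2016-06.io.spdk:cnode1), and the second spdk_nvme_identify run started here targets the NVM subsystem directly via its subnqn. A kernel-initiator host would consume the same discovery information roughly as follows; this is a sketch assuming nvme-cli and kernel NVMe/TCP support, and is not something this test runs:

    # Assumes nvme-cli is installed and the nvme_tcp kernel module is available.
    modprobe nvme_tcp
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1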
00:14:32.427 [2024-07-15 12:56:48.340079] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74846 ] 00:14:32.427 [2024-07-15 12:56:48.477454] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:32.427 [2024-07-15 12:56:48.477523] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:32.427 [2024-07-15 12:56:48.477531] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:32.427 [2024-07-15 12:56:48.477542] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:32.427 [2024-07-15 12:56:48.477549] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:32.427 [2024-07-15 12:56:48.477677] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:32.427 [2024-07-15 12:56:48.477728] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdc02c0 0 00:14:32.691 [2024-07-15 12:56:48.490390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:32.691 [2024-07-15 12:56:48.490420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:32.691 [2024-07-15 12:56:48.490427] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:32.691 [2024-07-15 12:56:48.490431] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:32.691 [2024-07-15 12:56:48.490480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.490488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.490493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.691 [2024-07-15 12:56:48.490507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:32.691 [2024-07-15 12:56:48.490538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.691 [2024-07-15 12:56:48.498385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.691 [2024-07-15 12:56:48.498417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.691 [2024-07-15 12:56:48.498423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.691 [2024-07-15 12:56:48.498445] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:32.691 [2024-07-15 12:56:48.498455] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:32.691 [2024-07-15 12:56:48.498463] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:32.691 [2024-07-15 12:56:48.498489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498499] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.691 [2024-07-15 12:56:48.498509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.691 [2024-07-15 12:56:48.498539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.691 [2024-07-15 12:56:48.498595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.691 [2024-07-15 12:56:48.498603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.691 [2024-07-15 12:56:48.498607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.691 [2024-07-15 12:56:48.498618] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:32.691 [2024-07-15 12:56:48.498626] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:32.691 [2024-07-15 12:56:48.498635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.691 [2024-07-15 12:56:48.498652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.691 [2024-07-15 12:56:48.498672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.691 [2024-07-15 12:56:48.498724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.691 [2024-07-15 12:56:48.498731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.691 [2024-07-15 12:56:48.498735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.691 [2024-07-15 12:56:48.498746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:32.691 [2024-07-15 12:56:48.498756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:32.691 [2024-07-15 12:56:48.498763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.691 [2024-07-15 12:56:48.498779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.691 [2024-07-15 12:56:48.498798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.691 [2024-07-15 12:56:48.498850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.691 [2024-07-15 12:56:48.498857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.691 [2024-07-15 12:56:48.498861] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.691 [2024-07-15 12:56:48.498871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:32.691 [2024-07-15 12:56:48.498882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.691 [2024-07-15 12:56:48.498899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.691 [2024-07-15 12:56:48.498917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.691 [2024-07-15 12:56:48.498968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.691 [2024-07-15 12:56:48.498976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.691 [2024-07-15 12:56:48.498979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.498984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.691 [2024-07-15 12:56:48.498989] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:32.691 [2024-07-15 12:56:48.498994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:32.691 [2024-07-15 12:56:48.499003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:32.691 [2024-07-15 12:56:48.499110] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:32.691 [2024-07-15 12:56:48.499124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:32.691 [2024-07-15 12:56:48.499135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.499140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.499144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.691 [2024-07-15 12:56:48.499152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.691 [2024-07-15 12:56:48.499172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.691 [2024-07-15 12:56:48.499220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.691 [2024-07-15 12:56:48.499227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.691 [2024-07-15 12:56:48.499232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.499236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.691 [2024-07-15 12:56:48.499242] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:32.691 [2024-07-15 12:56:48.499253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.499258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.499262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.691 [2024-07-15 12:56:48.499269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.691 [2024-07-15 12:56:48.499287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.691 [2024-07-15 12:56:48.499333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.691 [2024-07-15 12:56:48.499340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.691 [2024-07-15 12:56:48.499344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.691 [2024-07-15 12:56:48.499348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.691 [2024-07-15 12:56:48.499353] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:32.692 [2024-07-15 12:56:48.499378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.499395] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:32.692 [2024-07-15 12:56:48.499411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.499423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.692 [2024-07-15 12:56:48.499437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.692 [2024-07-15 12:56:48.499461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.692 [2024-07-15 12:56:48.499563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.692 [2024-07-15 12:56:48.499571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.692 [2024-07-15 12:56:48.499575] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499579] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc02c0): datao=0, datal=4096, cccid=0 00:14:32.692 [2024-07-15 12:56:48.499585] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe01940) on tqpair(0xdc02c0): expected_datao=0, payload_size=4096 00:14:32.692 [2024-07-15 12:56:48.499590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499599] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499604] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 
12:56:48.499613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.692 [2024-07-15 12:56:48.499619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.692 [2024-07-15 12:56:48.499623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.692 [2024-07-15 12:56:48.499637] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:32.692 [2024-07-15 12:56:48.499643] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:32.692 [2024-07-15 12:56:48.499648] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:32.692 [2024-07-15 12:56:48.499652] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:32.692 [2024-07-15 12:56:48.499657] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:32.692 [2024-07-15 12:56:48.499662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.499672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.499680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.692 [2024-07-15 12:56:48.499697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:32.692 [2024-07-15 12:56:48.499718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.692 [2024-07-15 12:56:48.499767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.692 [2024-07-15 12:56:48.499774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.692 [2024-07-15 12:56:48.499778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.692 [2024-07-15 12:56:48.499791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc02c0) 00:14:32.692 [2024-07-15 12:56:48.499806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.692 [2024-07-15 12:56:48.499813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdc02c0) 00:14:32.692 
[2024-07-15 12:56:48.499828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.692 [2024-07-15 12:56:48.499834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdc02c0) 00:14:32.692 [2024-07-15 12:56:48.499848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.692 [2024-07-15 12:56:48.499855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.692 [2024-07-15 12:56:48.499869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.692 [2024-07-15 12:56:48.499874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.499889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.499897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.499901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc02c0) 00:14:32.692 [2024-07-15 12:56:48.499909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.692 [2024-07-15 12:56:48.499929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01940, cid 0, qid 0 00:14:32.692 [2024-07-15 12:56:48.499937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01ac0, cid 1, qid 0 00:14:32.692 [2024-07-15 12:56:48.499943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01c40, cid 2, qid 0 00:14:32.692 [2024-07-15 12:56:48.499948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.692 [2024-07-15 12:56:48.499953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01f40, cid 4, qid 0 00:14:32.692 [2024-07-15 12:56:48.500039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.692 [2024-07-15 12:56:48.500051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.692 [2024-07-15 12:56:48.500056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01f40) on tqpair=0xdc02c0 00:14:32.692 [2024-07-15 12:56:48.500066] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:32.692 [2024-07-15 12:56:48.500079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.500095] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.500116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.500127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc02c0) 00:14:32.692 [2024-07-15 12:56:48.500152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:32.692 [2024-07-15 12:56:48.500182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01f40, cid 4, qid 0 00:14:32.692 [2024-07-15 12:56:48.500235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.692 [2024-07-15 12:56:48.500248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.692 [2024-07-15 12:56:48.500255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01f40) on tqpair=0xdc02c0 00:14:32.692 [2024-07-15 12:56:48.500338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.500374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:32.692 [2024-07-15 12:56:48.500391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc02c0) 00:14:32.692 [2024-07-15 12:56:48.500411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.692 [2024-07-15 12:56:48.500458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01f40, cid 4, qid 0 00:14:32.692 [2024-07-15 12:56:48.500519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.692 [2024-07-15 12:56:48.500527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.692 [2024-07-15 12:56:48.500531] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500536] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc02c0): datao=0, datal=4096, cccid=4 00:14:32.692 [2024-07-15 12:56:48.500541] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe01f40) on tqpair(0xdc02c0): expected_datao=0, payload_size=4096 00:14:32.692 [2024-07-15 12:56:48.500546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500553] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500558] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.692 [2024-07-15 12:56:48.500567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.692 [2024-07-15 12:56:48.500573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:32.692 [2024-07-15 12:56:48.500577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01f40) on tqpair=0xdc02c0 00:14:32.693 [2024-07-15 12:56:48.500601] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:32.693 [2024-07-15 12:56:48.500614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.500634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.500643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc02c0) 00:14:32.693 [2024-07-15 12:56:48.500656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.693 [2024-07-15 12:56:48.500679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01f40, cid 4, qid 0 00:14:32.693 [2024-07-15 12:56:48.500746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.693 [2024-07-15 12:56:48.500753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.693 [2024-07-15 12:56:48.500758] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500762] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc02c0): datao=0, datal=4096, cccid=4 00:14:32.693 [2024-07-15 12:56:48.500767] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe01f40) on tqpair(0xdc02c0): expected_datao=0, payload_size=4096 00:14:32.693 [2024-07-15 12:56:48.500772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500779] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500783] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.693 [2024-07-15 12:56:48.500798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.693 [2024-07-15 12:56:48.500802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01f40) on tqpair=0xdc02c0 00:14:32.693 [2024-07-15 12:56:48.500823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.500835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.500844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc02c0) 00:14:32.693 [2024-07-15 12:56:48.500857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.693 [2024-07-15 12:56:48.500877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01f40, cid 4, qid 0 00:14:32.693 [2024-07-15 12:56:48.500941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.693 [2024-07-15 12:56:48.500948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.693 [2024-07-15 12:56:48.500952] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500956] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc02c0): datao=0, datal=4096, cccid=4 00:14:32.693 [2024-07-15 12:56:48.500961] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe01f40) on tqpair(0xdc02c0): expected_datao=0, payload_size=4096 00:14:32.693 [2024-07-15 12:56:48.500966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500973] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500977] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.500986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.693 [2024-07-15 12:56:48.500992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.693 [2024-07-15 12:56:48.500996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01f40) on tqpair=0xdc02c0 00:14:32.693 [2024-07-15 12:56:48.501011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.501021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.501033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.501040] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.501046] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.501051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.501057] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:32.693 [2024-07-15 12:56:48.501062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:32.693 [2024-07-15 12:56:48.501068] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:32.693 [2024-07-15 12:56:48.501085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc02c0) 00:14:32.693 [2024-07-15 12:56:48.501098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.693 [2024-07-15 12:56:48.501106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc02c0) 00:14:32.693 [2024-07-15 12:56:48.501120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.693 [2024-07-15 12:56:48.501146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01f40, cid 4, qid 0 00:14:32.693 [2024-07-15 12:56:48.501154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe020c0, cid 5, qid 0 00:14:32.693 [2024-07-15 12:56:48.501218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.693 [2024-07-15 12:56:48.501225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.693 [2024-07-15 12:56:48.501229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01f40) on tqpair=0xdc02c0 00:14:32.693 [2024-07-15 12:56:48.501241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.693 [2024-07-15 12:56:48.501247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.693 [2024-07-15 12:56:48.501251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe020c0) on tqpair=0xdc02c0 00:14:32.693 [2024-07-15 12:56:48.501265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc02c0) 00:14:32.693 [2024-07-15 12:56:48.501278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.693 [2024-07-15 12:56:48.501295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe020c0, cid 5, qid 0 00:14:32.693 [2024-07-15 12:56:48.501341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.693 [2024-07-15 12:56:48.501349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.693 [2024-07-15 12:56:48.501353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe020c0) on tqpair=0xdc02c0 00:14:32.693 [2024-07-15 12:56:48.501390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc02c0) 00:14:32.693 [2024-07-15 12:56:48.501407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.693 [2024-07-15 12:56:48.501429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe020c0, cid 5, qid 0 00:14:32.693 [2024-07-15 12:56:48.501482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.693 [2024-07-15 12:56:48.501496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:32.693 [2024-07-15 12:56:48.501501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe020c0) on tqpair=0xdc02c0 00:14:32.693 [2024-07-15 12:56:48.501517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.693 [2024-07-15 12:56:48.501522] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc02c0) 00:14:32.694 [2024-07-15 12:56:48.501529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.694 [2024-07-15 12:56:48.501548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe020c0, cid 5, qid 0 00:14:32.694 [2024-07-15 12:56:48.501595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.694 [2024-07-15 12:56:48.501602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.694 [2024-07-15 12:56:48.501606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe020c0) on tqpair=0xdc02c0 00:14:32.694 [2024-07-15 12:56:48.501631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc02c0) 00:14:32.694 [2024-07-15 12:56:48.501645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.694 [2024-07-15 12:56:48.501653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc02c0) 00:14:32.694 [2024-07-15 12:56:48.501664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.694 [2024-07-15 12:56:48.501672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xdc02c0) 00:14:32.694 [2024-07-15 12:56:48.501683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.694 [2024-07-15 12:56:48.501695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdc02c0) 00:14:32.694 [2024-07-15 12:56:48.501707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.694 [2024-07-15 12:56:48.501727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe020c0, cid 5, qid 0 00:14:32.694 [2024-07-15 12:56:48.501734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01f40, cid 4, qid 0 00:14:32.694 [2024-07-15 12:56:48.501740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02240, cid 6, qid 0 00:14:32.694 [2024-07-15 
12:56:48.501745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe023c0, cid 7, qid 0 00:14:32.694 [2024-07-15 12:56:48.501881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.694 [2024-07-15 12:56:48.501898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.694 [2024-07-15 12:56:48.501903] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501907] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc02c0): datao=0, datal=8192, cccid=5 00:14:32.694 [2024-07-15 12:56:48.501912] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe020c0) on tqpair(0xdc02c0): expected_datao=0, payload_size=8192 00:14:32.694 [2024-07-15 12:56:48.501917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501935] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501941] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.694 [2024-07-15 12:56:48.501953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.694 [2024-07-15 12:56:48.501957] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501961] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc02c0): datao=0, datal=512, cccid=4 00:14:32.694 [2024-07-15 12:56:48.501966] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe01f40) on tqpair(0xdc02c0): expected_datao=0, payload_size=512 00:14:32.694 [2024-07-15 12:56:48.501970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501977] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501981] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.501987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.694 [2024-07-15 12:56:48.501993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.694 [2024-07-15 12:56:48.501997] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502000] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc02c0): datao=0, datal=512, cccid=6 00:14:32.694 [2024-07-15 12:56:48.502005] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe02240) on tqpair(0xdc02c0): expected_datao=0, payload_size=512 00:14:32.694 [2024-07-15 12:56:48.502010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502016] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502020] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:32.694 [2024-07-15 12:56:48.502033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:32.694 [2024-07-15 12:56:48.502037] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502040] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc02c0): datao=0, datal=4096, cccid=7 00:14:32.694 [2024-07-15 12:56:48.502045] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe023c0) on tqpair(0xdc02c0): expected_datao=0, payload_size=4096 00:14:32.694 [2024-07-15 12:56:48.502050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502057] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502061] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.694 [2024-07-15 12:56:48.502073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.694 [2024-07-15 12:56:48.502077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe020c0) on tqpair=0xdc02c0 00:14:32.694 [2024-07-15 12:56:48.502101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.694 [2024-07-15 12:56:48.502109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.694 [2024-07-15 12:56:48.502113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01f40) on tqpair=0xdc02c0 00:14:32.694 [2024-07-15 12:56:48.502130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.694 [2024-07-15 12:56:48.502137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.694 [2024-07-15 12:56:48.502141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02240) on tqpair=0xdc02c0 00:14:32.694 [2024-07-15 12:56:48.502153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.694 [2024-07-15 12:56:48.502159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.694 [2024-07-15 12:56:48.502163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.694 [2024-07-15 12:56:48.502167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe023c0) on tqpair=0xdc02c0 00:14:32.694 ===================================================== 00:14:32.694 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.694 ===================================================== 00:14:32.694 Controller Capabilities/Features 00:14:32.694 ================================ 00:14:32.694 Vendor ID: 8086 00:14:32.694 Subsystem Vendor ID: 8086 00:14:32.694 Serial Number: SPDK00000000000001 00:14:32.694 Model Number: SPDK bdev Controller 00:14:32.694 Firmware Version: 24.09 00:14:32.694 Recommended Arb Burst: 6 00:14:32.694 IEEE OUI Identifier: e4 d2 5c 00:14:32.694 Multi-path I/O 00:14:32.694 May have multiple subsystem ports: Yes 00:14:32.694 May have multiple controllers: Yes 00:14:32.694 Associated with SR-IOV VF: No 00:14:32.694 Max Data Transfer Size: 131072 00:14:32.694 Max Number of Namespaces: 32 00:14:32.694 Max Number of I/O Queues: 127 00:14:32.694 NVMe Specification Version (VS): 1.3 00:14:32.694 NVMe Specification Version (Identify): 1.3 00:14:32.694 Maximum Queue Entries: 128 00:14:32.694 Contiguous Queues Required: Yes 00:14:32.694 Arbitration Mechanisms Supported 00:14:32.694 Weighted Round Robin: Not Supported 00:14:32.694 Vendor Specific: Not Supported 00:14:32.694 Reset Timeout: 15000 ms 00:14:32.694 
Doorbell Stride: 4 bytes 00:14:32.694 NVM Subsystem Reset: Not Supported 00:14:32.694 Command Sets Supported 00:14:32.694 NVM Command Set: Supported 00:14:32.694 Boot Partition: Not Supported 00:14:32.694 Memory Page Size Minimum: 4096 bytes 00:14:32.694 Memory Page Size Maximum: 4096 bytes 00:14:32.694 Persistent Memory Region: Not Supported 00:14:32.694 Optional Asynchronous Events Supported 00:14:32.694 Namespace Attribute Notices: Supported 00:14:32.694 Firmware Activation Notices: Not Supported 00:14:32.694 ANA Change Notices: Not Supported 00:14:32.694 PLE Aggregate Log Change Notices: Not Supported 00:14:32.694 LBA Status Info Alert Notices: Not Supported 00:14:32.694 EGE Aggregate Log Change Notices: Not Supported 00:14:32.694 Normal NVM Subsystem Shutdown event: Not Supported 00:14:32.694 Zone Descriptor Change Notices: Not Supported 00:14:32.694 Discovery Log Change Notices: Not Supported 00:14:32.694 Controller Attributes 00:14:32.694 128-bit Host Identifier: Supported 00:14:32.694 Non-Operational Permissive Mode: Not Supported 00:14:32.694 NVM Sets: Not Supported 00:14:32.694 Read Recovery Levels: Not Supported 00:14:32.694 Endurance Groups: Not Supported 00:14:32.694 Predictable Latency Mode: Not Supported 00:14:32.694 Traffic Based Keep ALive: Not Supported 00:14:32.694 Namespace Granularity: Not Supported 00:14:32.694 SQ Associations: Not Supported 00:14:32.694 UUID List: Not Supported 00:14:32.694 Multi-Domain Subsystem: Not Supported 00:14:32.694 Fixed Capacity Management: Not Supported 00:14:32.694 Variable Capacity Management: Not Supported 00:14:32.694 Delete Endurance Group: Not Supported 00:14:32.694 Delete NVM Set: Not Supported 00:14:32.694 Extended LBA Formats Supported: Not Supported 00:14:32.694 Flexible Data Placement Supported: Not Supported 00:14:32.694 00:14:32.695 Controller Memory Buffer Support 00:14:32.695 ================================ 00:14:32.695 Supported: No 00:14:32.695 00:14:32.695 Persistent Memory Region Support 00:14:32.695 ================================ 00:14:32.695 Supported: No 00:14:32.695 00:14:32.695 Admin Command Set Attributes 00:14:32.695 ============================ 00:14:32.695 Security Send/Receive: Not Supported 00:14:32.695 Format NVM: Not Supported 00:14:32.695 Firmware Activate/Download: Not Supported 00:14:32.695 Namespace Management: Not Supported 00:14:32.695 Device Self-Test: Not Supported 00:14:32.695 Directives: Not Supported 00:14:32.695 NVMe-MI: Not Supported 00:14:32.695 Virtualization Management: Not Supported 00:14:32.695 Doorbell Buffer Config: Not Supported 00:14:32.695 Get LBA Status Capability: Not Supported 00:14:32.695 Command & Feature Lockdown Capability: Not Supported 00:14:32.695 Abort Command Limit: 4 00:14:32.695 Async Event Request Limit: 4 00:14:32.695 Number of Firmware Slots: N/A 00:14:32.695 Firmware Slot 1 Read-Only: N/A 00:14:32.695 Firmware Activation Without Reset: N/A 00:14:32.695 Multiple Update Detection Support: N/A 00:14:32.695 Firmware Update Granularity: No Information Provided 00:14:32.695 Per-Namespace SMART Log: No 00:14:32.695 Asymmetric Namespace Access Log Page: Not Supported 00:14:32.695 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:32.695 Command Effects Log Page: Supported 00:14:32.695 Get Log Page Extended Data: Supported 00:14:32.695 Telemetry Log Pages: Not Supported 00:14:32.695 Persistent Event Log Pages: Not Supported 00:14:32.695 Supported Log Pages Log Page: May Support 00:14:32.695 Commands Supported & Effects Log Page: Not Supported 00:14:32.695 Feature Identifiers & 
Effects Log Page:May Support 00:14:32.695 NVMe-MI Commands & Effects Log Page: May Support 00:14:32.695 Data Area 4 for Telemetry Log: Not Supported 00:14:32.695 Error Log Page Entries Supported: 128 00:14:32.695 Keep Alive: Supported 00:14:32.695 Keep Alive Granularity: 10000 ms 00:14:32.695 00:14:32.695 NVM Command Set Attributes 00:14:32.695 ========================== 00:14:32.695 Submission Queue Entry Size 00:14:32.695 Max: 64 00:14:32.695 Min: 64 00:14:32.695 Completion Queue Entry Size 00:14:32.695 Max: 16 00:14:32.695 Min: 16 00:14:32.695 Number of Namespaces: 32 00:14:32.695 Compare Command: Supported 00:14:32.695 Write Uncorrectable Command: Not Supported 00:14:32.695 Dataset Management Command: Supported 00:14:32.695 Write Zeroes Command: Supported 00:14:32.695 Set Features Save Field: Not Supported 00:14:32.695 Reservations: Supported 00:14:32.695 Timestamp: Not Supported 00:14:32.695 Copy: Supported 00:14:32.695 Volatile Write Cache: Present 00:14:32.695 Atomic Write Unit (Normal): 1 00:14:32.695 Atomic Write Unit (PFail): 1 00:14:32.695 Atomic Compare & Write Unit: 1 00:14:32.695 Fused Compare & Write: Supported 00:14:32.695 Scatter-Gather List 00:14:32.695 SGL Command Set: Supported 00:14:32.695 SGL Keyed: Supported 00:14:32.695 SGL Bit Bucket Descriptor: Not Supported 00:14:32.695 SGL Metadata Pointer: Not Supported 00:14:32.695 Oversized SGL: Not Supported 00:14:32.695 SGL Metadata Address: Not Supported 00:14:32.695 SGL Offset: Supported 00:14:32.695 Transport SGL Data Block: Not Supported 00:14:32.695 Replay Protected Memory Block: Not Supported 00:14:32.695 00:14:32.695 Firmware Slot Information 00:14:32.695 ========================= 00:14:32.695 Active slot: 1 00:14:32.695 Slot 1 Firmware Revision: 24.09 00:14:32.695 00:14:32.695 00:14:32.695 Commands Supported and Effects 00:14:32.695 ============================== 00:14:32.695 Admin Commands 00:14:32.695 -------------- 00:14:32.695 Get Log Page (02h): Supported 00:14:32.695 Identify (06h): Supported 00:14:32.695 Abort (08h): Supported 00:14:32.695 Set Features (09h): Supported 00:14:32.695 Get Features (0Ah): Supported 00:14:32.695 Asynchronous Event Request (0Ch): Supported 00:14:32.695 Keep Alive (18h): Supported 00:14:32.695 I/O Commands 00:14:32.695 ------------ 00:14:32.695 Flush (00h): Supported LBA-Change 00:14:32.695 Write (01h): Supported LBA-Change 00:14:32.695 Read (02h): Supported 00:14:32.695 Compare (05h): Supported 00:14:32.695 Write Zeroes (08h): Supported LBA-Change 00:14:32.695 Dataset Management (09h): Supported LBA-Change 00:14:32.695 Copy (19h): Supported LBA-Change 00:14:32.695 00:14:32.695 Error Log 00:14:32.695 ========= 00:14:32.695 00:14:32.695 Arbitration 00:14:32.695 =========== 00:14:32.695 Arbitration Burst: 1 00:14:32.695 00:14:32.695 Power Management 00:14:32.695 ================ 00:14:32.695 Number of Power States: 1 00:14:32.695 Current Power State: Power State #0 00:14:32.695 Power State #0: 00:14:32.695 Max Power: 0.00 W 00:14:32.695 Non-Operational State: Operational 00:14:32.695 Entry Latency: Not Reported 00:14:32.695 Exit Latency: Not Reported 00:14:32.695 Relative Read Throughput: 0 00:14:32.695 Relative Read Latency: 0 00:14:32.695 Relative Write Throughput: 0 00:14:32.695 Relative Write Latency: 0 00:14:32.695 Idle Power: Not Reported 00:14:32.695 Active Power: Not Reported 00:14:32.695 Non-Operational Permissive Mode: Not Supported 00:14:32.695 00:14:32.695 Health Information 00:14:32.695 ================== 00:14:32.695 Critical Warnings: 00:14:32.695 Available Spare Space: 
OK 00:14:32.695 Temperature: OK 00:14:32.695 Device Reliability: OK 00:14:32.695 Read Only: No 00:14:32.695 Volatile Memory Backup: OK 00:14:32.695 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:32.695 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:32.695 Available Spare: 0% 00:14:32.695 Available Spare Threshold: 0% 00:14:32.695 Life Percentage Used:[2024-07-15 12:56:48.502277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.695 [2024-07-15 12:56:48.502284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdc02c0) 00:14:32.695 [2024-07-15 12:56:48.502293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.695 [2024-07-15 12:56:48.502317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe023c0, cid 7, qid 0 00:14:32.695 [2024-07-15 12:56:48.502385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.695 [2024-07-15 12:56:48.502398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.695 [2024-07-15 12:56:48.502403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.695 [2024-07-15 12:56:48.502407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe023c0) on tqpair=0xdc02c0 00:14:32.695 [2024-07-15 12:56:48.502448] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:32.695 [2024-07-15 12:56:48.502461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01940) on tqpair=0xdc02c0 00:14:32.695 [2024-07-15 12:56:48.502468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.695 [2024-07-15 12:56:48.502474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01ac0) on tqpair=0xdc02c0 00:14:32.695 [2024-07-15 12:56:48.502478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.695 [2024-07-15 12:56:48.502484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01c40) on tqpair=0xdc02c0 00:14:32.695 [2024-07-15 12:56:48.502489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.695 [2024-07-15 12:56:48.502494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.695 [2024-07-15 12:56:48.502499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.695 [2024-07-15 12:56:48.502509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.695 [2024-07-15 12:56:48.502514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.695 [2024-07-15 12:56:48.502518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.695 [2024-07-15 12:56:48.502526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.695 [2024-07-15 12:56:48.502551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.695 [2024-07-15 12:56:48.502597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.695 [2024-07-15 12:56:48.502605] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.502609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.502621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.502637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.502658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.502723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.502736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.502741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.502750] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:32.696 [2024-07-15 12:56:48.502756] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:32.696 [2024-07-15 12:56:48.502767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.502784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.502803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.502851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.502858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.502862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.502877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.502893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.502910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.502957] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.502969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.502973] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.502989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.502997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.503005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.503022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.503074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.503081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.503084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.503099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.503116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.503133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.503179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.503190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.503194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.503210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.503227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.503244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.503295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.503302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.503306] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.503321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.503337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.503353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.503416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.503425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.503429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.503445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.503461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.503482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.503528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.503540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.503545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 [2024-07-15 12:56:48.503560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.696 [2024-07-15 12:56:48.503577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.696 [2024-07-15 12:56:48.503595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.696 [2024-07-15 12:56:48.503641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.696 [2024-07-15 12:56:48.503648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.696 [2024-07-15 12:56:48.503652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.696 [2024-07-15 12:56:48.503656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.696 
[2024-07-15 12:56:48.503667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.503683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.503700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.503751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.503762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.503766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.503782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.503798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.503815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.503858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.503865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.503869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.503884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.503900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.503917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.503965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.503972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.503976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.503990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.503995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 
12:56:48.503999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.504071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.504178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.504284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.504411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.504540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.504645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 
12:56:48.504749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.504858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.504899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.504916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.504964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.697 [2024-07-15 12:56:48.504970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.697 [2024-07-15 12:56:48.504974] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.697 [2024-07-15 12:56:48.504989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.697 [2024-07-15 12:56:48.504998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.697 [2024-07-15 12:56:48.505006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.697 [2024-07-15 12:56:48.505022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.697 [2024-07-15 12:56:48.505070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 
12:56:48.505081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.505096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.505129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.505177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.505188] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505192] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.505202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.505235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.505283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.505299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.505315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.505349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.505409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.505426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 
00:14:32.698 [2024-07-15 12:56:48.505442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.505479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.505521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.505532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.505547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.505580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.505625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.505636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.505651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.505683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.505729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.505745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.505760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:32.698 [2024-07-15 12:56:48.505769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.505794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.505838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.505849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.505864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.505896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.505944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.505955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.505960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.505975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.505984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.505992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.506009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.506058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.506069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.506074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.506089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.506105] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.506122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.506167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.506174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.506178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.506193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.506208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.506225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.506273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.506284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.506288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.506304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.506313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.506320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.698 [2024-07-15 12:56:48.506337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe01dc0, cid 3, qid 0 00:14:32.698 [2024-07-15 12:56:48.510379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:32.698 [2024-07-15 12:56:48.510401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:32.698 [2024-07-15 12:56:48.510406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.510411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0 00:14:32.698 [2024-07-15 12:56:48.510426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.510431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:32.698 [2024-07-15 12:56:48.510435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc02c0) 00:14:32.698 [2024-07-15 12:56:48.510444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:32.699 [2024-07-15 12:56:48.510470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xe01dc0, cid 3, qid 0
00:14:32.699 [2024-07-15 12:56:48.510518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:32.699 [2024-07-15 12:56:48.510525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:32.699 [2024-07-15 12:56:48.510529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:32.699 [2024-07-15 12:56:48.510533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe01dc0) on tqpair=0xdc02c0
00:14:32.699 [2024-07-15 12:56:48.510541] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:14:32.699 0%
00:14:32.699 Data Units Read: 0
00:14:32.699 Data Units Written: 0
00:14:32.699 Host Read Commands: 0
00:14:32.699 Host Write Commands: 0
00:14:32.699 Controller Busy Time: 0 minutes
00:14:32.699 Power Cycles: 0
00:14:32.699 Power On Hours: 0 hours
00:14:32.699 Unsafe Shutdowns: 0
00:14:32.699 Unrecoverable Media Errors: 0
00:14:32.699 Lifetime Error Log Entries: 0
00:14:32.699 Warning Temperature Time: 0 minutes
00:14:32.699 Critical Temperature Time: 0 minutes
00:14:32.699
00:14:32.699 Number of Queues
00:14:32.699 ================
00:14:32.699 Number of I/O Submission Queues: 127
00:14:32.699 Number of I/O Completion Queues: 127
00:14:32.699
00:14:32.699 Active Namespaces
00:14:32.699 =================
00:14:32.699 Namespace ID:1
00:14:32.699 Error Recovery Timeout: Unlimited
00:14:32.699 Command Set Identifier: NVM (00h)
00:14:32.699 Deallocate: Supported
00:14:32.699 Deallocated/Unwritten Error: Not Supported
00:14:32.699 Deallocated Read Value: Unknown
00:14:32.699 Deallocate in Write Zeroes: Not Supported
00:14:32.699 Deallocated Guard Field: 0xFFFF
00:14:32.699 Flush: Supported
00:14:32.699 Reservation: Supported
00:14:32.699 Namespace Sharing Capabilities: Multiple Controllers
00:14:32.699 Size (in LBAs): 131072 (0GiB)
00:14:32.699 Capacity (in LBAs): 131072 (0GiB)
00:14:32.699 Utilization (in LBAs): 131072 (0GiB)
00:14:32.699 NGUID: ABCDEF0123456789ABCDEF0123456789
00:14:32.699 EUI64: ABCDEF0123456789
00:14:32.699 UUID: 332f648a-b5a4-4192-8529-38a0f659a7f4
00:14:32.699 Thin Provisioning: Not Supported
00:14:32.699 Per-NS Atomic Units: Yes
00:14:32.699 Atomic Boundary Size (Normal): 0
00:14:32.699 Atomic Boundary Size (PFail): 0
00:14:32.699 Atomic Boundary Offset: 0
00:14:32.699 Maximum Single Source Range Length: 65535
00:14:32.699 Maximum Copy Length: 65535
00:14:32.699 Maximum Source Range Count: 1
00:14:32.699 NGUID/EUI64 Never Reused: No
00:14:32.699 Namespace Write Protected: No
00:14:32.699 Number of LBA Formats: 1
00:14:32.699 Current LBA Format: LBA Format #00
00:14:32.699 LBA Format #00: Data Size: 512 Metadata Size: 0
00:14:32.699
00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:32.699 12:56:48
nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.699 rmmod nvme_tcp 00:14:32.699 rmmod nvme_fabrics 00:14:32.699 rmmod nvme_keyring 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74805 ']' 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74805 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74805 ']' 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74805 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74805 00:14:32.699 killing process with pid 74805 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74805' 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74805 00:14:32.699 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74805 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:32.958 00:14:32.958 real 0m2.468s 00:14:32.958 user 0m7.004s 00:14:32.958 sys 0m0.584s 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.958 12:56:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.958 ************************************ 00:14:32.958 END TEST nvmf_identify 00:14:32.958 ************************************ 00:14:32.958 12:56:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:32.958 12:56:48 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:14:32.958 12:56:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:32.958 12:56:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.958 12:56:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.958 ************************************ 00:14:32.958 START TEST nvmf_perf 00:14:32.958 ************************************ 00:14:32.958 12:56:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:33.217 * Looking for test storage... 00:14:33.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.217 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.218 12:56:49 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:33.218 Cannot find device "nvmf_tgt_br" 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.218 Cannot find device "nvmf_tgt_br2" 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:33.218 Cannot find device "nvmf_tgt_br" 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:33.218 Cannot find device "nvmf_tgt_br2" 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:33.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:33.218 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:33.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:14:33.477 00:14:33.477 --- 10.0.0.2 ping statistics --- 00:14:33.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.477 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:33.477 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:33.477 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:33.477 00:14:33.477 --- 10.0.0.3 ping statistics --- 00:14:33.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.477 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:33.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:33.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:33.477 00:14:33.477 --- 10.0.0.1 ping statistics --- 00:14:33.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.477 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75008 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75008 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75008 ']' 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.477 12:56:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:33.477 [2024-07-15 12:56:49.412123] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
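For orientation, here is the virtual test network that nvmf_veth_init assembled in the trace above, condensed to its essential commands. Everything shown is taken from the traced nvmf/common.sh steps; only the ip link set ... up calls and the address/route flushes are omitted for brevity.
    ip netns add nvmf_tgt_ns_spdk                                              # the target runs inside this namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br                  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                    # first target-side veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2                  # second target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge                                            # bridge joining the host-side veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                        # allow forwarding across the bridge
The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply verify this plumbing before the nvmf_tgt application is started in the namespace.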
00:14:33.477 [2024-07-15 12:56:49.412213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.735 [2024-07-15 12:56:49.556353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.735 [2024-07-15 12:56:49.656856] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.735 [2024-07-15 12:56:49.657747] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.735 [2024-07-15 12:56:49.657926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.735 [2024-07-15 12:56:49.658109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.735 [2024-07-15 12:56:49.658251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.735 [2024-07-15 12:56:49.658456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.735 [2024-07-15 12:56:49.658558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.735 [2024-07-15 12:56:49.658953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.735 [2024-07-15 12:56:49.658964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.735 [2024-07-15 12:56:49.710616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:34.303 12:56:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.303 12:56:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:14:34.303 12:56:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.303 12:56:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.303 12:56:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:34.562 12:56:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.562 12:56:50 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:34.562 12:56:50 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:34.821 12:56:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:34.821 12:56:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:35.080 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:35.080 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:35.339 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:35.339 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:35.339 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:35.339 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:35.339 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:35.597 [2024-07-15 12:56:51.558828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
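With the TCP transport initialized above (and the 64 MB, 512-byte-block malloc bdev created earlier via bdev_malloc_create 64 512, which returned Malloc0), the trace that follows wires up the subsystem the host-side perf runs will connect to. Collapsed into the equivalent manual RPC sequence, with commands and arguments exactly as traced:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # namespace 1: malloc bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1      # namespace 2: local NVMe at 0000:00:10.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
Once the first listener is registered the target logs "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***", and the spdk_nvme_perf runs below connect with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.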
00:14:35.597 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:35.855 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:35.855 12:56:51 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:36.113 12:56:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:36.113 12:56:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:36.371 12:56:52 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.629 [2024-07-15 12:56:52.512039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.629 12:56:52 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.888 12:56:52 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:36.888 12:56:52 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:36.888 12:56:52 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:36.888 12:56:52 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:37.823 Initializing NVMe Controllers 00:14:37.823 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:37.823 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:37.823 Initialization complete. Launching workers. 00:14:37.823 ======================================================== 00:14:37.823 Latency(us) 00:14:37.823 Device Information : IOPS MiB/s Average min max 00:14:37.823 PCIE (0000:00:10.0) NSID 1 from core 0: 26232.18 102.47 1224.05 317.80 8170.92 00:14:37.823 ======================================================== 00:14:37.823 Total : 26232.18 102.47 1224.05 317.80 8170.92 00:14:37.823 00:14:38.081 12:56:53 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:39.458 Initializing NVMe Controllers 00:14:39.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:39.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:39.458 Initialization complete. Launching workers. 
00:14:39.458 ======================================================== 00:14:39.458 Latency(us) 00:14:39.458 Device Information : IOPS MiB/s Average min max 00:14:39.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3793.00 14.82 263.31 105.62 4383.86 00:14:39.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.43 7955.63 12029.96 00:14:39.458 ======================================================== 00:14:39.458 Total : 3917.00 15.30 512.30 105.62 12029.96 00:14:39.458 00:14:39.458 12:56:55 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:40.832 Initializing NVMe Controllers 00:14:40.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:40.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:40.832 Initialization complete. Launching workers. 00:14:40.832 ======================================================== 00:14:40.832 Latency(us) 00:14:40.832 Device Information : IOPS MiB/s Average min max 00:14:40.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8709.12 34.02 3674.76 567.06 7764.13 00:14:40.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3969.73 15.51 8092.30 6523.81 16520.36 00:14:40.832 ======================================================== 00:14:40.832 Total : 12678.85 49.53 5057.89 567.06 16520.36 00:14:40.832 00:14:40.832 12:56:56 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:40.832 12:56:56 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:43.357 Initializing NVMe Controllers 00:14:43.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.357 Controller IO queue size 128, less than required. 00:14:43.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:43.357 Controller IO queue size 128, less than required. 00:14:43.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:43.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:43.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:43.357 Initialization complete. Launching workers. 
00:14:43.357 ======================================================== 00:14:43.357 Latency(us) 00:14:43.357 Device Information : IOPS MiB/s Average min max 00:14:43.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1770.98 442.75 73292.77 35831.94 128872.42 00:14:43.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 665.99 166.50 198832.82 77476.64 323979.24 00:14:43.357 ======================================================== 00:14:43.357 Total : 2436.98 609.24 107601.21 35831.94 323979.24 00:14:43.357 00:14:43.357 12:56:59 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:43.615 Initializing NVMe Controllers 00:14:43.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.615 Controller IO queue size 128, less than required. 00:14:43.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:43.615 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:43.615 Controller IO queue size 128, less than required. 00:14:43.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:43.615 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:43.615 WARNING: Some requested NVMe devices were skipped 00:14:43.615 No valid NVMe controllers or AIO or URING devices found 00:14:43.615 12:56:59 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:46.145 Initializing NVMe Controllers 00:14:46.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.145 Controller IO queue size 128, less than required. 00:14:46.145 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:46.145 Controller IO queue size 128, less than required. 00:14:46.145 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:46.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:46.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:46.145 Initialization complete. Launching workers. 
00:14:46.145 00:14:46.145 ==================== 00:14:46.145 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:46.145 TCP transport: 00:14:46.145 polls: 10610 00:14:46.145 idle_polls: 5590 00:14:46.145 sock_completions: 5020 00:14:46.145 nvme_completions: 6689 00:14:46.145 submitted_requests: 10022 00:14:46.145 queued_requests: 1 00:14:46.145 00:14:46.145 ==================== 00:14:46.145 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:46.145 TCP transport: 00:14:46.145 polls: 13043 00:14:46.145 idle_polls: 8848 00:14:46.145 sock_completions: 4195 00:14:46.145 nvme_completions: 6855 00:14:46.145 submitted_requests: 10352 00:14:46.145 queued_requests: 1 00:14:46.145 ======================================================== 00:14:46.145 Latency(us) 00:14:46.145 Device Information : IOPS MiB/s Average min max 00:14:46.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1668.41 417.10 78415.61 41430.59 122913.84 00:14:46.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1709.82 427.45 75168.61 26492.42 122202.20 00:14:46.145 ======================================================== 00:14:46.145 Total : 3378.22 844.56 76772.21 26492.42 122913.84 00:14:46.145 00:14:46.145 12:57:01 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:46.145 12:57:01 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.403 rmmod nvme_tcp 00:14:46.403 rmmod nvme_fabrics 00:14:46.403 rmmod nvme_keyring 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75008 ']' 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75008 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75008 ']' 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75008 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75008 00:14:46.403 killing process with pid 75008 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:46.403 12:57:02 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75008' 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75008 00:14:46.403 12:57:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75008 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:47.340 00:14:47.340 real 0m14.162s 00:14:47.340 user 0m51.544s 00:14:47.340 sys 0m3.935s 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.340 12:57:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:47.340 ************************************ 00:14:47.340 END TEST nvmf_perf 00:14:47.340 ************************************ 00:14:47.340 12:57:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:47.340 12:57:03 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:47.340 12:57:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:47.340 12:57:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.340 12:57:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:47.340 ************************************ 00:14:47.340 START TEST nvmf_fio_host 00:14:47.340 ************************************ 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:47.340 * Looking for test storage... 
00:14:47.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
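Before the fio_host target is started, nvmf_veth_init builds the test network topology traced below; condensed, with the interface names and addresses exactly as they appear in the trace, it amounts to:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pairs
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # bridge the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
Each interface and the bridge are also brought up with ip link set ... up, and the three pings that follow verify that 10.0.0.2, 10.0.0.3 and 10.0.0.1 are reachable before nvmf_tgt is launched inside the namespace.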
00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:47.340 Cannot find device "nvmf_tgt_br" 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.340 Cannot find device "nvmf_tgt_br2" 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:47.340 Cannot find device "nvmf_tgt_br" 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:47.340 Cannot find device "nvmf_tgt_br2" 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:47.340 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.341 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:47.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:14:47.599 00:14:47.599 --- 10.0.0.2 ping statistics --- 00:14:47.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.599 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:47.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:47.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:47.599 00:14:47.599 --- 10.0.0.3 ping statistics --- 00:14:47.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.599 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:47.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:47.599 00:14:47.599 --- 10.0.0.1 ping statistics --- 00:14:47.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.599 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75412 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75412 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75412 ']' 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.599 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.600 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.600 12:57:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:47.600 [2024-07-15 12:57:03.631453] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:47.600 [2024-07-15 12:57:03.631521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.857 [2024-07-15 12:57:03.763257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.857 [2024-07-15 12:57:03.888206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:47.857 [2024-07-15 12:57:03.888266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.857 [2024-07-15 12:57:03.888284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.857 [2024-07-15 12:57:03.888296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.858 [2024-07-15 12:57:03.888308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.858 [2024-07-15 12:57:03.888506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.858 [2024-07-15 12:57:03.888647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.858 [2024-07-15 12:57:03.889390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.858 [2024-07-15 12:57:03.889395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.120 [2024-07-15 12:57:03.951189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.687 12:57:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.687 12:57:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:14:48.687 12:57:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:48.946 [2024-07-15 12:57:04.884334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.946 12:57:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:48.946 12:57:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.946 12:57:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:48.946 12:57:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:49.512 Malloc1 00:14:49.512 12:57:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:49.770 12:57:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.028 12:57:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.028 [2024-07-15 12:57:06.055138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.028 12:57:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.286 12:57:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:50.286 12:57:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:50.287 12:57:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:50.545 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:50.546 fio-3.35 00:14:50.546 Starting 1 thread 00:14:53.077 00:14:53.077 test: (groupid=0, jobs=1): err= 0: pid=75493: Mon Jul 15 12:57:08 2024 00:14:53.077 read: IOPS=8664, BW=33.8MiB/s (35.5MB/s)(67.9MiB/2006msec) 00:14:53.077 slat (usec): min=2, max=266, avg= 2.42, stdev= 2.41 00:14:53.077 clat (usec): min=1571, max=17196, avg=7705.29, stdev=968.93 00:14:53.077 lat (usec): min=1606, max=17198, avg=7707.71, stdev=968.82 00:14:53.077 clat percentiles (usec): 00:14:53.077 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111], 00:14:53.077 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7504], 60.00th=[ 7701], 00:14:53.077 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9372], 00:14:53.077 | 99.00th=[11731], 99.50th=[12780], 99.90th=[15926], 99.95th=[16450], 00:14:53.077 | 99.99th=[17171] 00:14:53.077 bw ( KiB/s): min=32352, max=35800, per=99.91%, avg=34628.00, stdev=1573.55, samples=4 00:14:53.077 iops : min= 8088, max= 8950, avg=8657.00, stdev=393.39, samples=4 00:14:53.077 write: IOPS=8656, BW=33.8MiB/s (35.5MB/s)(67.8MiB/2006msec); 0 zone resets 00:14:53.077 slat (usec): 
min=2, max=170, avg= 2.53, stdev= 1.52 00:14:53.077 clat (usec): min=1469, max=16076, avg=7020.93, stdev=891.65 00:14:53.077 lat (usec): min=1478, max=16078, avg=7023.46, stdev=891.64 00:14:53.077 clat percentiles (usec): 00:14:53.077 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521], 00:14:53.077 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6849], 60.00th=[ 6980], 00:14:53.077 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7963], 95.00th=[ 8586], 00:14:53.077 | 99.00th=[10421], 99.50th=[11863], 99.90th=[14484], 99.95th=[15926], 00:14:53.077 | 99.99th=[16057] 00:14:53.077 bw ( KiB/s): min=32368, max=35664, per=99.95%, avg=34610.00, stdev=1554.75, samples=4 00:14:53.077 iops : min= 8092, max= 8916, avg=8652.50, stdev=388.69, samples=4 00:14:53.077 lat (msec) : 2=0.04%, 4=0.18%, 10=98.14%, 20=1.64% 00:14:53.077 cpu : usr=69.43%, sys=22.89%, ctx=22, majf=0, minf=7 00:14:53.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:53.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:53.077 issued rwts: total=17381,17365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:53.077 00:14:53.077 Run status group 0 (all jobs): 00:14:53.077 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=67.9MiB (71.2MB), run=2006-2006msec 00:14:53.077 WRITE: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=67.8MiB (71.1MB), run=2006-2006msec 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:53.077 12:57:08 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:53.077 12:57:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:53.077 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:53.077 fio-3.35 00:14:53.077 Starting 1 thread 00:14:55.608 00:14:55.608 test: (groupid=0, jobs=1): err= 0: pid=75542: Mon Jul 15 12:57:11 2024 00:14:55.608 read: IOPS=8263, BW=129MiB/s (135MB/s)(259MiB/2009msec) 00:14:55.608 slat (usec): min=3, max=125, avg= 3.84, stdev= 1.91 00:14:55.608 clat (usec): min=2674, max=18082, avg=8514.66, stdev=2848.86 00:14:55.608 lat (usec): min=2677, max=18085, avg=8518.50, stdev=2848.93 00:14:55.608 clat percentiles (usec): 00:14:55.608 | 1.00th=[ 3982], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 5932], 00:14:55.608 | 30.00th=[ 6652], 40.00th=[ 7373], 50.00th=[ 8029], 60.00th=[ 8848], 00:14:55.608 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[12518], 95.00th=[14353], 00:14:55.608 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17957], 99.95th=[17957], 00:14:55.608 | 99.99th=[17957] 00:14:55.608 bw ( KiB/s): min=62208, max=79328, per=52.19%, avg=69000.00, stdev=7320.58, samples=4 00:14:55.608 iops : min= 3888, max= 4958, avg=4312.50, stdev=457.54, samples=4 00:14:55.608 write: IOPS=4901, BW=76.6MiB/s (80.3MB/s)(141MiB/1841msec); 0 zone resets 00:14:55.608 slat (usec): min=33, max=331, avg=39.08, stdev= 7.21 00:14:55.608 clat (usec): min=5964, max=21342, avg=12126.87, stdev=2113.05 00:14:55.608 lat (usec): min=6001, max=21379, avg=12165.95, stdev=2114.08 00:14:55.608 clat percentiles (usec): 00:14:55.608 | 1.00th=[ 7963], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10290], 00:14:55.608 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:14:55.608 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15008], 95.00th=[15795], 00:14:55.608 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19006], 99.95th=[20841], 00:14:55.608 | 99.99th=[21365] 00:14:55.608 bw ( KiB/s): min=64224, max=82304, per=91.42%, avg=71696.00, stdev=7700.62, samples=4 00:14:55.608 iops : min= 4014, max= 5144, avg=4481.00, stdev=481.29, samples=4 00:14:55.608 lat (msec) : 4=0.69%, 10=51.08%, 20=48.21%, 50=0.02% 00:14:55.608 cpu : usr=82.02%, sys=13.05%, ctx=6, majf=0, minf=12 00:14:55.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:55.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.608 issued rwts: total=16601,9024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.608 00:14:55.608 Run status group 0 (all jobs): 00:14:55.608 READ: bw=129MiB/s (135MB/s), 
129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2009-2009msec 00:14:55.608 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=141MiB (148MB), run=1841-1841msec 00:14:55.608 12:57:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.608 12:57:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:55.608 12:57:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:55.608 12:57:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.609 rmmod nvme_tcp 00:14:55.609 rmmod nvme_fabrics 00:14:55.609 rmmod nvme_keyring 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75412 ']' 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75412 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75412 ']' 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75412 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75412 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:55.609 killing process with pid 75412 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75412' 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75412 00:14:55.609 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75412 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:55.869 00:14:55.869 real 0m8.745s 00:14:55.869 user 0m36.149s 00:14:55.869 sys 0m2.260s 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.869 12:57:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:55.869 ************************************ 00:14:55.869 END TEST nvmf_fio_host 00:14:55.869 ************************************ 00:14:56.129 12:57:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:56.129 12:57:11 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:56.129 12:57:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.129 12:57:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.129 12:57:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.129 ************************************ 00:14:56.129 START TEST nvmf_failover 00:14:56.129 ************************************ 00:14:56.129 12:57:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:56.129 * Looking for test storage... 00:14:56.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:14:56.129 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:56.130 Cannot find device "nvmf_tgt_br" 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:14:56.130 Cannot find device "nvmf_tgt_br2" 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:56.130 Cannot find device "nvmf_tgt_br" 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:56.130 Cannot find device "nvmf_tgt_br2" 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.130 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:56.400 12:57:12 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:56.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:56.400 00:14:56.400 --- 10.0.0.2 ping statistics --- 00:14:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.400 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:56.400 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.400 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:56.400 00:14:56.400 --- 10.0.0.3 ping statistics --- 00:14:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.400 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:56.400 00:14:56.400 --- 10.0.0.1 ping statistics --- 00:14:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.400 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75758 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75758 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
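Condensed for reference, the nvmftestinit/nvmf_veth_init trace above boils down to the following scaffolding: a network namespace for the target, three veth pairs tied together by a bridge, 10.0.0.x/24 addresses on both sides, an iptables rule admitting the NVMe/TCP port, and the target launched inside the namespace. Every name, address and flag below is lifted from the trace itself; the grouping into a loop is the only liberty taken, so treat this as an illustrative sketch of what the common.sh helpers do, not the helpers themselves.

  # target runs in its own namespace; the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per interface; the *_br ends stay in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator at 10.0.0.1, target listeners at 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the root-namespace ends together
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity pings, then launch the target inside the namespace and wait for its RPC socket
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &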
00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75758 ']' 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.400 12:57:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:56.400 [2024-07-15 12:57:12.426814] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:56.400 [2024-07-15 12:57:12.426907] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.659 [2024-07-15 12:57:12.570684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:56.659 [2024-07-15 12:57:12.681428] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.659 [2024-07-15 12:57:12.681489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.659 [2024-07-15 12:57:12.681503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.659 [2024-07-15 12:57:12.681514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.659 [2024-07-15 12:57:12.681523] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
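A note on the core masks in play here: the target was started with -m 0xE, which is binary 1110, i.e. cores 1, 2 and 3 — matching the "Total cores available: 3" notice above and the three "Reactor started on core N" lines that follow, while the bdevperf process launched later runs a single reactor on core 0 (see its startup banner further down), so the two processes land on disjoint cores. A quick, purely illustrative way to expand such a mask in bash:

  # expand a hex core mask into the cores it selects (illustration only)
  mask=0xE
  for i in $(seq 0 7); do (( (mask >> i) & 1 )) && echo "core $i"; done
  # prints: core 1, core 2, core 3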
00:14:56.659 [2024-07-15 12:57:12.681671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.659 [2024-07-15 12:57:12.682444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.659 [2024-07-15 12:57:12.682454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.918 [2024-07-15 12:57:12.737477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:57.483 12:57:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.483 12:57:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:57.483 12:57:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.483 12:57:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.483 12:57:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:57.483 12:57:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.483 12:57:13 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:57.741 [2024-07-15 12:57:13.651716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.741 12:57:13 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:57.998 Malloc0 00:14:57.998 12:57:13 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.255 12:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.512 12:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.769 [2024-07-15 12:57:14.744818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.769 12:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:59.026 [2024-07-15 12:57:14.968937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:59.026 12:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:59.284 [2024-07-15 12:57:15.189115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75821 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75821 /var/tmp/bdevperf.sock 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover 
-- common/autotest_common.sh@829 -- # '[' -z 75821 ']' 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.284 12:57:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:00.218 12:57:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.218 12:57:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:00.218 12:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:00.475 NVMe0n1 00:15:00.475 12:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:01.041 00:15:01.041 12:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75839 00:15:01.041 12:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:01.041 12:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:01.979 12:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.238 [2024-07-15 12:57:18.132317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.238 [2024-07-15 12:57:18.132456] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 [2024-07-15 12:57:18.133292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90950 is same with the state(5) to be set 00:15:02.239 12:57:18 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:05.519 12:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:05.519 00:15:05.519 12:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:05.777 12:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:09.140 12:57:24 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.140 [2024-07-15 12:57:25.034087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.140 12:57:25 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:10.073 12:57:26 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:10.331 12:57:26 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75839 00:15:16.903 0 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75821 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@948 -- # '[' -z 75821 ']' 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75821 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75821 00:15:16.903 killing process with pid 75821 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75821' 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75821 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75821 00:15:16.903 12:57:32 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:16.903 [2024-07-15 12:57:15.246872] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:16.903 [2024-07-15 12:57:15.246955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75821 ] 00:15:16.903 [2024-07-15 12:57:15.382181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.903 [2024-07-15 12:57:15.496182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.903 [2024-07-15 12:57:15.547542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:16.903 Running I/O for 15 seconds... 
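The listing that follows is the try.txt that host/failover.sh cats after the run: bdevperf's own log of the 15-second verify workload. The bursts of "ABORTED - SQ DELETION (00/08)" completions in it are the expected fallout of removing a listener while I/O is in flight on that path; the failover behaviour under test then carries the workload on the surviving path, which is why the run above still completes and pid 75821 is killed cleanly. Condensed from the rpc.py calls traced earlier (controller name, addresses, ports and NQN exactly as in the log; only the $rpc shorthand and the explicit backgrounding are added for readability), the failover choreography looks roughly like this:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # cnode1 already exposes listeners on 10.0.0.2:4420, 4421 and 4422 (set up above);
  # register two of them as paths under the same controller name NVMe0
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # 15 s verify workload in the background, listeners toggled underneath it
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail the first path
  sleep 3
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail the second path
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait   # the verify run should finish without failed I/O if failover worked

Note that, mirroring the trace, the nvmf_subsystem_* calls go to the target's default /var/tmp/spdk.sock while the attach_controller calls talk to bdevperf's own RPC socket via -s /var/tmp/bdevperf.sock.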
00:15:16.903 [2024-07-15 12:57:18.133355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133710] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.903 [2024-07-15 12:57:18.133959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.903 [2024-07-15 12:57:18.133973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.133989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:95 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64096 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.134983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.134998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:16.904 [2024-07-15 12:57:18.135028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.904 [2024-07-15 12:57:18.135303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.904 [2024-07-15 12:57:18.135318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135661] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.135977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.135993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:16.905 [2024-07-15 12:57:18.136301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136642] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.905 [2024-07-15 12:57:18.136708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-07-15 12:57:18.136722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:18.136752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:18.136782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:18.136811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:18.136842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:18.136871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:18.136901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:18.136930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.136968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.136985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.906 [2024-07-15 12:57:18.137260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:18.137276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.906 [2024-07-15 12:57:18.137289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.906 [2024-07-15 12:57:18.137319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.906 [2024-07-15 12:57:18.137348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.906 [2024-07-15 12:57:18.137398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.906 [2024-07-15 12:57:18.137428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:16.906 [2024-07-15 12:57:18.137458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19597c0 is same with the state(5) to be set
00:15:16.906 [2024-07-15 12:57:18.137489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:16.906 [2024-07-15 12:57:18.137500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:16.906 [2024-07-15 12:57:18.137516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0
00:15:16.906 [2024-07-15 12:57:18.137529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137586] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19597c0 was disconnected and freed. reset controller.
00:15:16.906 [2024-07-15 12:57:18.137604] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:15:16.906 [2024-07-15 12:57:18.137658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:16.906 [2024-07-15 12:57:18.137678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:16.906 [2024-07-15 12:57:18.137706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:16.906 [2024-07-15 12:57:18.137733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:16.906 [2024-07-15 12:57:18.137760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.906 [2024-07-15 12:57:18.137773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:16.906 [2024-07-15 12:57:18.137815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1908570 (9): Bad file descriptor
00:15:16.906 [2024-07-15 12:57:18.141653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:16.906 [2024-07-15 12:57:18.173661] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:15:16.906 [2024-07-15 12:57:21.726848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.906 [2024-07-15 12:57:21.726908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.726950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.906 [2024-07-15 12:57:21.726966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.726981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.906 [2024-07-15 12:57:21.726994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.727008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.906 [2024-07-15 12:57:21.727021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.727034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1908570 is same with the state(5) to be set 00:15:16.906 [2024-07-15 12:57:21.727101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:21.727123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.727146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:21.727162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.727178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:21.727192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.727207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:21.727221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.727237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:21.727251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.727266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-07-15 12:57:21.727280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.906 [2024-07-15 12:57:21.727296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.727310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.727339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.907 [2024-07-15 12:57:21.727872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.727901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.727930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.727960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.727975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.727989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 
[2024-07-15 12:57:21.728251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-07-15 12:57:21.728353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.907 [2024-07-15 12:57:21.728382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.728871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:104 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.728902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.728932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.728962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.728978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.728998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73432 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.908 [2024-07-15 12:57:21.729367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 
[2024-07-15 12:57:21.729528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.908 [2024-07-15 12:57:21.729675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.908 [2024-07-15 12:57:21.729691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.729705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.729734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.729763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.729801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.729831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.729860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.729890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.729919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.729949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.729978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.729993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 
12:57:21.730775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.909 [2024-07-15 12:57:21.730858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.730971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.730986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.731002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.909 [2024-07-15 12:57:21.731016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.909 [2024-07-15 12:57:21.731032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:21.731045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:21.731061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:21.731075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:21.731120] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.910 [2024-07-15 12:57:21.731135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.910 [2024-07-15 12:57:21.731147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73664 len:8 PRP1 0x0 PRP2 0x0 00:15:16.910 [2024-07-15 12:57:21.731160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:21.731217] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x198ad30 was disconnected and freed. reset controller. 00:15:16.910 [2024-07-15 12:57:21.731234] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:16.910 [2024-07-15 12:57:21.731249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:16.910 [2024-07-15 12:57:21.735050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:16.910 [2024-07-15 12:57:21.735090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1908570 (9): Bad file descriptor 00:15:16.910 [2024-07-15 12:57:21.775991] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:16.910 [2024-07-15 12:57:26.305152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.910 [2024-07-15 12:57:26.305228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.910 [2024-07-15 12:57:26.305263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.910 [2024-07-15 12:57:26.305290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.910 [2024-07-15 12:57:26.305318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1908570 is same with the state(5) to be set 00:15:16.910 [2024-07-15 12:57:26.305444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.305467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.305507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.305538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.305568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.305598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.305628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.305657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.305687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.305971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.305987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.910 [2024-07-15 12:57:26.306305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.306336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.306380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.306412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 12:57:26.306427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.306441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.910 [2024-07-15 
12:57:26.306457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.910 [2024-07-15 12:57:26.306471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.306977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.306991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.911 [2024-07-15 12:57:26.307441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24056 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.911 [2024-07-15 12:57:26.307661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.911 [2024-07-15 12:57:26.307677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.307691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 
[2024-07-15 12:57:26.307780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.307983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.307997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.308280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.308310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.308340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.308382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.308412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.308442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.308483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.912 [2024-07-15 12:57:26.308518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.308980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.912 [2024-07-15 12:57:26.308996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.912 [2024-07-15 12:57:26.309009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 
12:57:26.309054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.913 [2024-07-15 12:57:26.309483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.913 [2024-07-15 12:57:26.309547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.913 [2024-07-15 12:57:26.309559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24304 len:8 PRP1 0x0 PRP2 0x0 00:15:16.913 [2024-07-15 12:57:26.309572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.913 [2024-07-15 12:57:26.309632] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1989dd0 was disconnected and freed. reset controller. 00:15:16.913 [2024-07-15 12:57:26.309650] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:16.913 [2024-07-15 12:57:26.309665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:16.913 [2024-07-15 12:57:26.313540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:16.913 [2024-07-15 12:57:26.313585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1908570 (9): Bad file descriptor 00:15:16.913 [2024-07-15 12:57:26.352204] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
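The burst of ABORTED - SQ DELETION completions above is the expected signature of a path failover: once the active listener disappears, every queued READ/WRITE on that qpair is completed with status type 0x0 / status code 0x08, and bdev_nvme moves to the next configured trid (here from 10.0.0.2:4422 back to 10.0.0.2:4420) and resets the controller. A condensed sketch of how this scenario is provoked from the shell, reusing the rpc.py calls that appear later in this log; the bdevperf.log file name is an assumption (the test itself greps the output it captured, e.g. try.txt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Expose two extra TCP listeners on the subsystem so the initiator has alternate paths.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Attach the bdevperf-side controller to all three ports; the first attach becomes the
# active path, the later ones are registered as failover trids.
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Drop the active path while I/O is in flight: queued requests are aborted (SQ DELETION)
# and bdev_nvme resets onto the next path, logging "Resetting controller successful".
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Count the successful resets in the captured bdevperf output (file name assumed).
grep -c 'Resetting controller successful' bdevperf.log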
00:15:16.913 00:15:16.913 Latency(us) 00:15:16.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.913 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:16.913 Verification LBA range: start 0x0 length 0x4000 00:15:16.913 NVMe0n1 : 15.01 8901.60 34.77 231.15 0.00 13982.99 614.40 17396.83 00:15:16.913 =================================================================================================================== 00:15:16.913 Total : 8901.60 34.77 231.15 0.00 13982.99 614.40 17396.83 00:15:16.913 Received shutdown signal, test time was about 15.000000 seconds 00:15:16.913 00:15:16.913 Latency(us) 00:15:16.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.913 =================================================================================================================== 00:15:16.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76013 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76013 /var/tmp/bdevperf.sock 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76013 ']' 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.913 12:57:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:17.490 12:57:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.490 12:57:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:17.490 12:57:33 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:17.490 [2024-07-15 12:57:33.475136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:17.490 12:57:33 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:17.761 [2024-07-15 12:57:33.767432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:17.761 12:57:33 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.020 NVMe0n1 00:15:18.279 12:57:34 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.537 00:15:18.537 12:57:34 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.796 00:15:18.796 12:57:34 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:18.796 12:57:34 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:19.055 12:57:35 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:19.314 12:57:35 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:22.622 12:57:38 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:22.622 12:57:38 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:22.622 12:57:38 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76096 00:15:22.622 12:57:38 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:22.622 12:57:38 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76096 00:15:23.993 0 00:15:23.993 12:57:39 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:23.993 [2024-07-15 12:57:32.344672] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:23.993 [2024-07-15 12:57:32.345562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76013 ] 00:15:23.993 [2024-07-15 12:57:32.481118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.993 [2024-07-15 12:57:32.587476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.993 [2024-07-15 12:57:32.640841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:23.993 [2024-07-15 12:57:35.303262] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:23.993 [2024-07-15 12:57:35.303401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.993 [2024-07-15 12:57:35.303427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.993 [2024-07-15 12:57:35.303445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.993 [2024-07-15 12:57:35.303459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.993 [2024-07-15 12:57:35.303473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.993 [2024-07-15 12:57:35.303486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.993 [2024-07-15 12:57:35.303500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.993 [2024-07-15 12:57:35.303513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.993 [2024-07-15 12:57:35.303527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:23.993 [2024-07-15 12:57:35.303576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.993 [2024-07-15 12:57:35.303606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbe570 (9): Bad file descriptor 00:15:23.993 [2024-07-15 12:57:35.306062] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:23.993 Running I/O for 1 seconds... 
00:15:23.993 00:15:23.993 Latency(us) 00:15:23.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.993 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:23.993 Verification LBA range: start 0x0 length 0x4000 00:15:23.993 NVMe0n1 : 1.01 6989.87 27.30 0.00 0.00 18193.14 1750.11 15371.17 00:15:23.993 =================================================================================================================== 00:15:23.994 Total : 6989.87 27.30 0.00 0.00 18193.14 1750.11 15371.17 00:15:23.994 12:57:39 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:23.994 12:57:39 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:24.251 12:57:40 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:24.510 12:57:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:24.510 12:57:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:24.768 12:57:40 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:25.026 12:57:40 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:28.389 12:57:43 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:28.389 12:57:43 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76013 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76013 ']' 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76013 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76013 00:15:28.389 killing process with pid 76013 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76013' 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76013 00:15:28.389 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76013 00:15:28.647 12:57:44 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:28.647 12:57:44 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:28.906 12:57:44 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.906 rmmod nvme_tcp 00:15:28.906 rmmod nvme_fabrics 00:15:28.906 rmmod nvme_keyring 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75758 ']' 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75758 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75758 ']' 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75758 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75758 00:15:28.906 killing process with pid 75758 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75758' 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75758 00:15:28.906 12:57:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75758 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:29.165 00:15:29.165 real 0m33.172s 00:15:29.165 user 2m8.971s 00:15:29.165 sys 0m5.804s 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.165 12:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:29.165 ************************************ 00:15:29.165 END TEST nvmf_failover 00:15:29.165 ************************************ 00:15:29.165 12:57:45 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:29.165 12:57:45 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:29.165 12:57:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:29.165 12:57:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.165 12:57:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.165 ************************************ 00:15:29.165 START TEST nvmf_host_discovery 00:15:29.165 ************************************ 00:15:29.165 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:29.424 * Looking for test storage... 00:15:29.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:29.424 12:57:45 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:29.425 Cannot find device "nvmf_tgt_br" 00:15:29.425 
12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.425 Cannot find device "nvmf_tgt_br2" 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:29.425 Cannot find device "nvmf_tgt_br" 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:29.425 Cannot find device "nvmf_tgt_br2" 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.425 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
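The commands above and immediately below are nvmf_veth_init assembling the virtual test network used when NET_TYPE=virt: a network namespace for the target, veth pairs whose target-side ends are moved into that namespace, 10.0.0.0/24 addressing, a bridge joining the host-side ends, an iptables rule admitting NVMe/TCP on port 4420, and ping checks. Condensed into one sketch with the same interface names and addresses as this run (the "Cannot find device" errors above are only the pre-cleanup of interfaces that do not exist yet; that error handling is omitted here):

# Namespace for the target and veth pairs linking it to the initiator side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side veth ends together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity check in both directions.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1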
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:29.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:15:29.684 00:15:29.684 --- 10.0.0.2 ping statistics --- 00:15:29.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.684 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:29.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:29.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:29.684 00:15:29.684 --- 10.0.0.3 ping statistics --- 00:15:29.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.684 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:29.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:29.684 00:15:29.684 --- 10.0.0.1 ping statistics --- 00:15:29.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.684 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76359 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76359 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76359 ']' 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.684 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.685 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.685 12:57:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.685 [2024-07-15 12:57:45.710474] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:29.685 [2024-07-15 12:57:45.710552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.943 [2024-07-15 12:57:45.851689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.943 [2024-07-15 12:57:45.963865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:29.943 [2024-07-15 12:57:45.963928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.943 [2024-07-15 12:57:45.963939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.943 [2024-07-15 12:57:45.963946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.943 [2024-07-15 12:57:45.963953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.943 [2024-07-15 12:57:45.963975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.201 [2024-07-15 12:57:46.018674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 [2024-07-15 12:57:46.744253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 [2024-07-15 12:57:46.752344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 null0 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 null1 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76397 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76397 /tmp/host.sock 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76397 ']' 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.846 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.846 12:57:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 [2024-07-15 12:57:46.838596] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:30.846 [2024-07-15 12:57:46.838692] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76397 ] 00:15:31.104 [2024-07-15 12:57:46.974832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.104 [2024-07-15 12:57:47.087734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.104 [2024-07-15 12:57:47.142044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 12:57:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.039 12:57:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.039 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 [2024-07-15 12:57:48.244808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:32.299 
12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.299 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:15:32.559 12:57:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:15:33.126 [2024-07-15 12:57:48.897537] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:33.126 [2024-07-15 12:57:48.897572] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:33.126 [2024-07-15 12:57:48.897593] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:33.126 [2024-07-15 12:57:48.903587] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:33.126 [2024-07-15 12:57:48.960919] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:33.126 [2024-07-15 12:57:48.960951] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:33.737 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 [2024-07-15 12:57:49.786669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:33.738 [2024-07-15 12:57:49.787279] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:33.738 [2024-07-15 12:57:49.787311] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:33.738 [2024-07-15 12:57:49.793268] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.738 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:33.998 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.999 [2024-07-15 12:57:49.851532] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:33.999 [2024-07-15 12:57:49.851561] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:33.999 [2024-07-15 12:57:49.851569] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:33.999 
12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.999 12:57:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.999 [2024-07-15 12:57:50.035597] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:33.999 [2024-07-15 12:57:50.035637] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:33.999 [2024-07-15 12:57:50.037981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.999 [2024-07-15 12:57:50.038033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.999 [2024-07-15 12:57:50.038047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.999 [2024-07-15 12:57:50.038057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.999 [2024-07-15 12:57:50.038068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.999 [2024-07-15 12:57:50.038078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.999 [2024-07-15 12:57:50.038089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.999 [2024-07-15 12:57:50.038099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.999 [2024-07-15 12:57:50.038109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdb600 is same with the state(5) to be set 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:33.999 [2024-07-15 12:57:50.041588] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # local max=10 00:15:33.999 [2024-07-15 12:57:50.041621] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:33.999 [2024-07-15 12:57:50.041697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdb600 (9): Bad file descriptor 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.999 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.259 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.519 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:34.519 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:34.520 12:57:50 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.520 12:57:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.456 [2024-07-15 12:57:51.478132] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:35.457 [2024-07-15 12:57:51.478173] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:35.457 [2024-07-15 12:57:51.478224] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:35.457 [2024-07-15 12:57:51.484165] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:35.715 [2024-07-15 12:57:51.544986] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:35.715 [2024-07-15 12:57:51.545058] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.715 12:57:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.715 request: 00:15:35.715 { 00:15:35.715 "name": "nvme", 00:15:35.715 "trtype": "tcp", 00:15:35.715 "traddr": "10.0.0.2", 00:15:35.715 "adrfam": "ipv4", 00:15:35.715 "trsvcid": "8009", 00:15:35.715 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:35.715 "wait_for_attach": true, 00:15:35.715 "method": "bdev_nvme_start_discovery", 00:15:35.715 "req_id": 1 00:15:35.715 } 00:15:35.715 Got JSON-RPC error response 00:15:35.715 response: 00:15:35.715 { 00:15:35.715 "code": -17, 00:15:35.715 "message": "File exists" 00:15:35.715 } 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:35.715 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.716 request: 00:15:35.716 { 00:15:35.716 "name": "nvme_second", 00:15:35.716 "trtype": "tcp", 00:15:35.716 "traddr": "10.0.0.2", 00:15:35.716 "adrfam": "ipv4", 00:15:35.716 "trsvcid": "8009", 00:15:35.716 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:35.716 "wait_for_attach": true, 00:15:35.716 "method": "bdev_nvme_start_discovery", 00:15:35.716 "req_id": 1 00:15:35.716 } 00:15:35.716 Got JSON-RPC error response 00:15:35.716 response: 00:15:35.716 { 00:15:35.716 "code": -17, 00:15:35.716 "message": "File exists" 00:15:35.716 } 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.716 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:15:35.974 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.975 12:57:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.909 [2024-07-15 12:57:52.821549] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:36.909 [2024-07-15 12:57:52.821618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf4f20 with addr=10.0.0.2, port=8010 00:15:36.909 [2024-07-15 12:57:52.821644] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:36.909 [2024-07-15 12:57:52.821655] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:36.909 [2024-07-15 12:57:52.821666] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:37.842 [2024-07-15 12:57:53.821612] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:37.842 [2024-07-15 12:57:53.821680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf4f20 with addr=10.0.0.2, port=8010 00:15:37.842 [2024-07-15 12:57:53.821704] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:37.842 [2024-07-15 12:57:53.821714] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:37.842 [2024-07-15 12:57:53.821724] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:38.776 [2024-07-15 12:57:54.821439] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:38.776 request: 00:15:38.776 { 00:15:38.776 "name": "nvme_second", 00:15:38.776 "trtype": "tcp", 00:15:38.776 "traddr": "10.0.0.2", 00:15:38.776 "adrfam": "ipv4", 00:15:38.776 "trsvcid": "8010", 00:15:38.776 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:38.776 "wait_for_attach": false, 00:15:38.776 "attach_timeout_ms": 3000, 00:15:38.776 "method": "bdev_nvme_start_discovery", 00:15:38.776 "req_id": 1 
00:15:38.776 } 00:15:38.776 Got JSON-RPC error response 00:15:38.776 response: 00:15:38.776 { 00:15:38.776 "code": -110, 00:15:38.776 "message": "Connection timed out" 00:15:38.776 } 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:38.776 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:39.035 12:57:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.035 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:39.035 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:39.035 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76397 00:15:39.035 12:57:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:39.035 12:57:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.035 12:57:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.294 rmmod nvme_tcp 00:15:39.294 rmmod nvme_fabrics 00:15:39.294 rmmod nvme_keyring 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76359 ']' 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76359 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76359 ']' 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76359 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76359 00:15:39.294 
killing process with pid 76359 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76359' 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76359 00:15:39.294 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76359 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:39.553 ************************************ 00:15:39.553 END TEST nvmf_host_discovery 00:15:39.553 ************************************ 00:15:39.553 00:15:39.553 real 0m10.338s 00:15:39.553 user 0m19.782s 00:15:39.553 sys 0m1.977s 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.553 12:57:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:39.553 12:57:55 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:39.553 12:57:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:39.553 12:57:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.553 12:57:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:39.553 ************************************ 00:15:39.553 START TEST nvmf_host_multipath_status 00:15:39.553 ************************************ 00:15:39.553 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:39.812 * Looking for test storage... 
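The "END TEST nvmf_host_discovery" banner, the real/user/sys timing, and the "START TEST nvmf_host_multipath_status" banner that follows are all emitted by the run_test wrapper in common/autotest_common.sh. Below is a minimal Bash sketch of what that wrapper appears to do, reconstructed only from the xtrace output in this log; the variable names test_name and rc are part of the sketch, and the real helper also toggles xtrace and validates its arguments, which is omitted here.

run_test() {
    # run_test <test_name> <command> [args...]  -- sketch reconstructed from the log
    local test_name=$1
    shift

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    # 'time' is what produces the real/user/sys lines seen above;
    # the wrapped script's exit status decides pass/fail
    time "$@"
    local rc=$?

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

In this log it is invoked as run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp, which is what prints the START TEST banner for the next test below.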
00:15:39.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.812 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:39.813 Cannot find device "nvmf_tgt_br" 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:15:39.813 Cannot find device "nvmf_tgt_br2" 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:39.813 Cannot find device "nvmf_tgt_br" 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:39.813 Cannot find device "nvmf_tgt_br2" 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.813 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.072 12:57:55 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.072 12:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:40.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:15:40.072 00:15:40.072 --- 10.0.0.2 ping statistics --- 00:15:40.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.072 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:40.072 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:40.072 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:40.072 00:15:40.072 --- 10.0.0.3 ping statistics --- 00:15:40.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.072 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:40.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:40.072 00:15:40.072 --- 10.0.0.1 ping statistics --- 00:15:40.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.072 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76852 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76852 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76852 ']' 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.072 12:57:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:40.072 [2024-07-15 12:57:56.114487] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
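Condensed, the nvmf_veth_init sequence traced above wires the initiator to the target namespace roughly as follows. This is a sketch only: names and addresses are the ones used by the script, while the link-up commands and the cleanup of any previous run are omitted.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the veth pairs are bridged before nvmf_tgt is launched with ip netns exec nvmf_tgt_ns_spdk.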
00:15:40.072 [2024-07-15 12:57:56.114598] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.331 [2024-07-15 12:57:56.254851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:40.331 [2024-07-15 12:57:56.369422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.331 [2024-07-15 12:57:56.369492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.331 [2024-07-15 12:57:56.369503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.331 [2024-07-15 12:57:56.369512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.331 [2024-07-15 12:57:56.369519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.331 [2024-07-15 12:57:56.369672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.331 [2024-07-15 12:57:56.369748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.591 [2024-07-15 12:57:56.425140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:41.167 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.167 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:41.167 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.167 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.167 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:41.167 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.167 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76852 00:15:41.167 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:41.426 [2024-07-15 12:57:57.348477] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.426 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:41.685 Malloc0 00:15:41.685 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:41.944 12:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:42.202 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.461 [2024-07-15 12:57:58.446218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.461 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:42.719 [2024-07-15 12:57:58.710499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76906 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76906 /var/tmp/bdevperf.sock 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76906 ']' 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.719 12:57:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:44.094 12:57:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.094 12:57:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:44.094 12:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:44.094 12:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:44.353 Nvme0n1 00:15:44.353 12:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:44.612 Nvme0n1 00:15:44.612 12:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:44.612 12:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:47.143 12:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:47.143 12:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:47.143 12:58:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:47.400 12:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:48.399 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:48.399 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:48.399 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.399 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:48.399 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.399 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:48.399 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.399 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:48.965 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:48.965 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:48.965 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.965 12:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:49.225 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:49.225 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:49.225 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.225 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:49.483 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:49.483 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:49.483 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.483 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:49.741 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:49.741 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:15:49.741 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:49.741 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.999 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:49.999 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:49.999 12:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:50.256 12:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:50.513 12:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:51.447 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:51.447 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:51.447 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.448 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:51.706 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:51.706 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:51.706 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.706 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:51.965 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.965 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:51.965 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.965 12:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:52.224 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.224 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:52.224 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.224 12:58:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:52.482 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.482 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:52.482 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.482 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:52.740 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.740 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:52.740 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.740 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:52.997 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.997 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:52.997 12:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:53.255 12:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:53.513 12:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:54.466 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:54.466 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:54.466 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:54.466 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.775 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.775 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:54.775 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.775 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:55.033 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:15:55.033 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:55.033 12:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.033 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:55.291 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.291 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:55.291 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.291 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:55.549 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.549 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:55.549 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.549 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:55.807 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.807 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:55.807 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.807 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:56.065 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.065 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:56.065 12:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:56.324 12:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:56.583 12:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:57.518 12:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:57.518 12:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:57.518 12:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.518 12:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:57.777 12:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.777 12:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:57.777 12:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.777 12:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:58.345 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:58.345 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:58.345 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:58.345 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.605 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.605 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:58.605 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.605 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:58.863 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.863 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:58.863 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.863 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:58.863 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.863 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:58.863 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.863 12:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:59.427 12:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:59.427 12:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:15:59.427 12:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:59.427 12:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:59.684 12:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:01.056 12:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:01.056 12:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:01.056 12:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.056 12:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:01.056 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:01.056 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:01.056 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.056 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.313 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:01.313 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.313 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.313 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:01.608 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.608 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:01.608 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.608 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:01.865 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.865 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:01.865 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.865 12:58:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:02.124 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:02.124 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:02.124 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.124 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:02.382 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:02.382 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:02.382 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:02.640 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:02.898 12:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:03.834 12:58:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:03.834 12:58:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:03.834 12:58:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.834 12:58:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:04.092 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.092 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:04.092 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.092 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.351 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.351 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.351 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.351 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.609 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.609 12:58:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:04.609 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.609 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:04.868 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.868 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:04.868 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.868 12:58:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:05.125 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:05.125 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:05.125 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.125 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:05.383 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.383 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:05.641 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:05.641 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:05.900 12:58:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:06.159 12:58:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:07.094 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:07.094 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:07.094 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.094 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:07.659 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.659 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:07.659 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.659 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:07.659 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.659 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:07.659 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:07.659 12:58:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.276 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.276 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:08.276 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.276 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:08.276 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.276 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:08.276 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.276 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:08.532 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.532 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:08.532 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.532 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:08.789 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.789 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:08.789 12:58:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:09.355 12:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:09.355 12:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:10.302 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:10.302 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:10.302 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.302 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:10.865 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.865 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:10.865 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.865 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:10.866 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.866 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:10.866 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.866 12:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:11.123 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.123 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.123 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.123 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.730 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.730 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:11.730 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.730 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.730 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.730 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:11.730 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.730 12:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:11.988 12:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.988 12:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:11.988 12:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:12.246 12:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:12.504 12:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:13.439 12:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:13.439 12:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:13.439 12:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.439 12:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:13.698 12:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.698 12:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:13.698 12:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.698 12:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:14.265 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.265 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.265 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.265 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.265 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.265 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.265 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.265 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:14.523 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.523 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:14.523 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.523 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:14.782 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.782 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:14.782 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.782 12:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:15.041 12:58:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.041 12:58:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:15.041 12:58:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:15.300 12:58:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:15.568 12:58:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:16.533 12:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:16.533 12:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.533 12:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.533 12:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.791 12:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.791 12:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:16.791 12:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.791 12:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:17.050 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.050 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:16:17.050 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.050 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.310 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.310 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.310 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.310 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.569 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.569 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.569 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.569 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.827 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.827 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:17.827 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.827 12:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76906 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76906 ']' 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76906 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76906 00:16:18.084 killing process with pid 76906 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:18.084 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:18.085 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76906' 00:16:18.085 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76906 
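The xtrace above exercises three small helpers from host/multipath_status.sh; the following is a minimal sketch of what they appear to do, reconstructed only from the traced commands (the exact bodies in test/nvmf/host/multipath_status.sh may differ; the rpc.py path and the bdevperf socket are the ones shown in the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

set_ANA_state() {   # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {     # $1 = trsvcid (port), $2 = io_path field (current/connected/accessible), $3 = expected value
    [[ $($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

check_status() {    # six expectations: current, connected, accessible for port 4420 then 4421
    port_status 4420 current "$1" && port_status 4421 current "$2" &&
    port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

This matches the traced sequence, e.g. "check_status true false true true true false" after set_ANA_state non_optimized inaccessible expands into the six port_status calls logged above.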
00:16:18.085 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76906 00:16:18.346 Connection closed with partial response: 00:16:18.346 00:16:18.346 00:16:18.346 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76906 00:16:18.346 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:18.346 [2024-07-15 12:57:58.778279] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:18.346 [2024-07-15 12:57:58.778408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76906 ] 00:16:18.346 [2024-07-15 12:57:58.915105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.346 [2024-07-15 12:57:59.036089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.346 [2024-07-15 12:57:59.090153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:18.346 Running I/O for 90 seconds... 00:16:18.346 [2024-07-15 12:58:15.439532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.346 [2024-07-15 12:58:15.439612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:18.346 [2024-07-15 12:58:15.439672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.346 [2024-07-15 12:58:15.439693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:18.346 [2024-07-15 12:58:15.439715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.439730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.439760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.439774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.439794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.439808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.439830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.439844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.439865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71576 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.439879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.439899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.439914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.439935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.439948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.439969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.439983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440235] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.440530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 
12:58:15.440637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.440971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.440992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441335] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.347 [2024-07-15 12:58:15.441574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 
12:58:15.441717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:18.347 [2024-07-15 12:58:15.441888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.347 [2024-07-15 12:58:15.441902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.441923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.441937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.441958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.441972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.441992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71312 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442455] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 
12:58:15.442820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.442834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.442972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.442987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 
cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.348 [2024-07-15 12:58:15.443435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.443470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.443505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:18.348 [2024-07-15 12:58:15.443526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.348 [2024-07-15 12:58:15.443540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 [... repeated nvme_qpair.c NOTICE output, 00:16:18.348-00:16:18.350: alternating 243:nvme_io_qpair_print_command entries (READ/WRITE, sqid:1, nsid:1, len:8, roughly lba 71464 through 77648) and 474:spdk_nvme_print_completion entries, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 between 12:58:15 and 12:58:31 while the active path was inaccessible ...] 00:16:18.350 [2024-07-15 12:58:31.516806] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.350 [2024-07-15 12:58:31.516820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:18.350 [2024-07-15 12:58:31.516840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.350 [2024-07-15 12:58:31.516854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:18.350 [2024-07-15 12:58:31.516874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.350 [2024-07-15 12:58:31.516888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:18.350 Received shutdown signal, test time was about 33.359687 seconds 00:16:18.350 00:16:18.350 Latency(us) 00:16:18.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.350 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:18.350 Verification LBA range: start 0x0 length 0x4000 00:16:18.350 Nvme0n1 : 33.36 8373.96 32.71 0.00 0.00 15253.10 733.56 4026531.84 00:16:18.350 =================================================================================================================== 00:16:18.350 Total : 8373.96 32.71 0.00 0.00 15253.10 733.56 4026531.84 00:16:18.350 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.608 rmmod nvme_tcp 00:16:18.608 rmmod nvme_fabrics 00:16:18.608 rmmod nvme_keyring 00:16:18.608 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76852 ']' 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76852 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76852 ']' 00:16:18.866 12:58:34 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76852 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76852 00:16:18.866 killing process with pid 76852 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76852' 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76852 00:16:18.866 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76852 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:19.123 00:16:19.123 real 0m39.390s 00:16:19.123 user 2m7.483s 00:16:19.123 sys 0m11.468s 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.123 12:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:19.123 ************************************ 00:16:19.123 END TEST nvmf_host_multipath_status 00:16:19.123 ************************************ 00:16:19.123 12:58:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:19.123 12:58:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:19.123 12:58:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:19.123 12:58:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.123 12:58:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.123 ************************************ 00:16:19.123 START TEST nvmf_discovery_remove_ifc 00:16:19.123 ************************************ 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:19.123 * Looking for test storage... 
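Note: run_test above only times and wraps the test script. A minimal sketch of reproducing the same case outside the autotest harness, assuming the /home/vagrant/spdk_repo layout shown in these paths, a built SPDK tree and root privileges (the script creates network namespaces and veth devices):
  # reproduce the wrapped invocation directly (sketch; path and arguments taken from the run_test line above)
  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/host/discovery_remove_ifc.sh --transport=tcp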
00:16:19.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.123 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:19.124 Cannot find device "nvmf_tgt_br" 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:19.124 Cannot find device "nvmf_tgt_br2" 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:19.124 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:19.383 Cannot find device "nvmf_tgt_br" 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:19.383 Cannot find device "nvmf_tgt_br2" 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:19.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:16:19.383 00:16:19.383 --- 10.0.0.2 ping statistics --- 00:16:19.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.383 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:19.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:16:19.383 00:16:19.383 --- 10.0.0.3 ping statistics --- 00:16:19.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.383 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:19.383 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:19.642 00:16:19.642 --- 10.0.0.1 ping statistics --- 00:16:19.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.642 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77690 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77690 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77690 ']' 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.642 12:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.642 [2024-07-15 12:58:35.532135] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
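Note: the nvmf_veth_init trace above builds the virtual test network (NET_TYPE=virt) before the target is launched inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that topology, using only commands and names that appear in the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 is created the same way and omitted here):
  # initiator stays in the root namespace, the target interface is moved into nvmf_tgt_ns_spdk
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up                 # bridge joins the two veth halves
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                        # connectivity check, as logged above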
00:16:19.642 [2024-07-15 12:58:35.532248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.642 [2024-07-15 12:58:35.671830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.900 [2024-07-15 12:58:35.770478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.900 [2024-07-15 12:58:35.770530] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.900 [2024-07-15 12:58:35.770556] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.900 [2024-07-15 12:58:35.770564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.900 [2024-07-15 12:58:35.770570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.900 [2024-07-15 12:58:35.770599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.900 [2024-07-15 12:58:35.824076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.510 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.510 [2024-07-15 12:58:36.544951] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.510 [2024-07-15 12:58:36.553070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:20.510 null0 00:16:20.767 [2024-07-15 12:58:36.584987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.767 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
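Note: at this point the target application owns a TCP transport, a null bdev (null0) and listeners on 10.0.0.2 ports 8009 (discovery) and 4420, but the individual rpc_cmd invocations are not expanded in this trace. A hedged sketch of equivalent scripts/rpc.py calls, with the subsystem NQN nqn.2016-06.io.spdk:cnode0, host NQN nqn.2021-12.io.spdk:test, discovery NQN and serial number taken from elsewhere in this log, and the null bdev size/block size chosen only for illustration:
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # target app listens on the default /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp
  $RPC bdev_null_create null0 1000 512                   # 1000 MiB / 512 B block size are assumed values
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009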
00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77722 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77722 /tmp/host.sock 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77722 ']' 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.767 12:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.767 [2024-07-15 12:58:36.660861] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:20.767 [2024-07-15 12:58:36.660945] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77722 ] 00:16:20.767 [2024-07-15 12:58:36.803474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.023 [2024-07-15 12:58:36.916814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:21.966 [2024-07-15 12:58:37.727629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.966 12:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.905 [2024-07-15 12:58:38.778248] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:22.905 [2024-07-15 12:58:38.778274] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:22.905 [2024-07-15 12:58:38.778293] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:22.905 [2024-07-15 12:58:38.784295] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:22.905 [2024-07-15 12:58:38.841686] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:22.905 [2024-07-15 12:58:38.841926] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:22.905 [2024-07-15 12:58:38.842002] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:22.905 [2024-07-15 12:58:38.842133] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:22.905 [2024-07-15 12:58:38.842216] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:22.905 [2024-07-15 12:58:38.846771] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa99de0 was disconnected and freed. delete nvme_qpair. 
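The blocks of rpc_cmd/jq/sort/xargs lines that follow are the expanded trace of a small polling helper run once per second. Condensed, the pattern is roughly the sketch below, reconstructed from the xtrace output itself (the script's real helper may add a timeout that never trips here):

    # get_bdev_list / wait_for_bdev, as reconstructed from the trace below.
    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1    # each repeated "sleep 1" below is one iteration of this loop
        done
    }
    wait_for_bdev nvme0n1    # discovery attached nvme0, so its namespace bdev should appear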
00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.905 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.164 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:23.164 12:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:24.099 12:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.099 12:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.099 12:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.099 12:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.099 12:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.099 12:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.099 12:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.099 12:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.099 12:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:24.099 12:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:16:25.037 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.037 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.037 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.037 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.037 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:25.037 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.037 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.037 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.295 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:25.295 12:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:26.229 12:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:27.165 12:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:28.564 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:28.565 12:58:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:28.565 [2024-07-15 12:58:44.279755] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:28.565 [2024-07-15 12:58:44.279824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.565 [2024-07-15 12:58:44.279839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.565 [2024-07-15 12:58:44.279852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.565 [2024-07-15 12:58:44.279861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.565 [2024-07-15 12:58:44.279871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.565 [2024-07-15 12:58:44.279885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.565 [2024-07-15 12:58:44.279895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.565 [2024-07-15 12:58:44.279905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.565 [2024-07-15 12:58:44.279914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.565 [2024-07-15 12:58:44.279923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.565 [2024-07-15 12:58:44.279933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ffac0 is same with the state(5) to be set 00:16:28.565 [2024-07-15 12:58:44.289753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ffac0 (9): Bad file descriptor 00:16:28.565 [2024-07-15 12:58:44.299775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.528 12:58:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.528 [2024-07-15 12:58:45.349493] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:29.528 [2024-07-15 12:58:45.349941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ffac0 with addr=10.0.0.2, port=4420 00:16:29.528 [2024-07-15 12:58:45.349995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ffac0 is same with the state(5) to be set 00:16:29.528 [2024-07-15 12:58:45.350074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ffac0 (9): Bad file descriptor 00:16:29.528 [2024-07-15 12:58:45.351011] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:29.528 [2024-07-15 12:58:45.351074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:29.528 [2024-07-15 12:58:45.351097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:29.528 [2024-07-15 12:58:45.351119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:29.528 [2024-07-15 12:58:45.351186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:29.528 [2024-07-15 12:58:45.351213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:29.528 12:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.462 [2024-07-15 12:58:46.351279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:30.462 [2024-07-15 12:58:46.351329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:30.462 [2024-07-15 12:58:46.351358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:30.462 [2024-07-15 12:58:46.351368] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:30.462 [2024-07-15 12:58:46.351408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
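The errno 110 and reconnect entries above, and the removal entries that follow, are the expected consequence of the options passed at @69: with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, bdev_nvme retries roughly once per second and gives up about two seconds after the path drops, deleting the controller and its nvme0n1 bdev. The test only watches the bdev list, but the controller state could also be inspected directly; the call below is not run by the script and is shown only as an illustration:

    # Not part of this trace: inspect the failing controller from the host app by hand.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0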
00:16:30.462 [2024-07-15 12:58:46.351454] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:30.462 [2024-07-15 12:58:46.351530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.462 [2024-07-15 12:58:46.351546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.462 [2024-07-15 12:58:46.351559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.462 [2024-07-15 12:58:46.351568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.462 [2024-07-15 12:58:46.351577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.462 [2024-07-15 12:58:46.351585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.462 [2024-07-15 12:58:46.351594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.462 [2024-07-15 12:58:46.351602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.462 [2024-07-15 12:58:46.351612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.462 [2024-07-15 12:58:46.351620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.462 [2024-07-15 12:58:46.351629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:30.462 [2024-07-15 12:58:46.351673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa03860 (9): Bad file descriptor 00:16:30.462 [2024-07-15 12:58:46.352659] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:30.462 [2024-07-15 12:58:46.352685] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:30.462 12:58:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:31.835 12:58:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.403 [2024-07-15 12:58:48.363070] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:32.403 [2024-07-15 12:58:48.363095] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:32.403 [2024-07-15 12:58:48.363113] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:32.403 [2024-07-15 12:58:48.369121] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:32.403 [2024-07-15 12:58:48.425557] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:32.403 [2024-07-15 12:58:48.425602] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:32.403 [2024-07-15 12:58:48.425625] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:32.403 [2024-07-15 12:58:48.425640] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:32.403 [2024-07-15 12:58:48.425648] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:32.403 [2024-07-15 12:58:48.431566] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xaa6d90 was disconnected and freed. delete nvme_qpair. 
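Pulled together, the interface flap that this test exercises is just the four ip commands traced at @75/@76 and @82/@83; everything else in the surrounding output is the host application reacting to them:

    # Interface removal (@75/@76): the target address and link go away...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ...nvme0n1 disappears once the ctrlr-loss timeout expires, then the path is restored (@82/@83)...
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # ...and discovery re-attaches the subsystem, this time surfacing it as nvme1n1.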
00:16:32.661 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.661 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.661 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77722 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77722 ']' 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77722 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77722 00:16:32.662 killing process with pid 77722 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77722' 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77722 00:16:32.662 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77722 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.977 rmmod nvme_tcp 00:16:32.977 rmmod nvme_fabrics 00:16:32.977 rmmod nvme_keyring 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:32.977 12:58:48 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77690 ']' 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77690 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77690 ']' 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77690 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77690 00:16:32.977 killing process with pid 77690 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77690' 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77690 00:16:32.977 12:58:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77690 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:33.236 00:16:33.236 real 0m14.210s 00:16:33.236 user 0m24.624s 00:16:33.236 sys 0m2.528s 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.236 12:58:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.236 ************************************ 00:16:33.236 END TEST nvmf_discovery_remove_ifc 00:16:33.236 ************************************ 00:16:33.236 12:58:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:33.237 12:58:49 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:33.237 12:58:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:33.237 12:58:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.237 12:58:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.237 ************************************ 00:16:33.237 START TEST nvmf_identify_kernel_target 00:16:33.237 ************************************ 00:16:33.237 12:58:49 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:33.496 * Looking for test storage... 00:16:33.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.496 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:33.497 Cannot find device "nvmf_tgt_br" 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.497 Cannot find device "nvmf_tgt_br2" 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:33.497 Cannot find device "nvmf_tgt_br" 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:33.497 Cannot find device "nvmf_tgt_br2" 00:16:33.497 12:58:49 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:33.497 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:33.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:33.757 00:16:33.757 --- 10.0.0.2 ping statistics --- 00:16:33.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.757 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:33.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:33.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:33.757 00:16:33.757 --- 10.0.0.3 ping statistics --- 00:16:33.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.757 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:33.757 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:33.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:33.758 00:16:33.758 --- 10.0.0.1 ping statistics --- 00:16:33.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.758 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:33.758 12:58:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:34.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:34.325 Waiting for block devices as requested 00:16:34.325 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:34.325 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:34.325 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:34.325 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:34.325 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:34.325 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:34.326 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:34.326 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.326 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:34.326 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:34.326 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:34.326 No valid GPT data, bailing 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:34.585 No valid GPT data, bailing 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:34.585 No valid GPT data, bailing 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:34.585 No valid GPT data, bailing 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
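The loop traced above is how nvmf/common.sh picks a backing device for the kernel target: it walks /sys/block/nvme*, skips zoned namespaces, treats anything that already carries partition-table data as in use, and keeps the last free namespace it finds (here /dev/nvme1n1). A condensed sketch of that selection follows; the in-use test is approximated with blkid since the trace only shows spdk-gpt.py bailing and an empty PTTYPE, so the helper below is a simplification rather than the script itself:

    # Sketch of the backing-device selection seen in the trace above.
    # The real block_in_use() also consults spdk-gpt.py; blkid stands in here.
    nvme=""
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # Skip zoned namespaces - the kernel target wants a conventional device.
        if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
            continue
        fi
        # Treat any namespace that already has a partition table as in use.
        if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]]; then
            continue
        fi
        nvme=/dev/$dev      # last free namespace wins, as in the trace
    done
    [[ -b $nvme ]] && echo "using $nvme as the nvmet backing device"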
00:16:34.585 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88 -a 10.0.0.1 -t tcp -s 4420 00:16:34.845 00:16:34.845 Discovery Log Number of Records 2, Generation counter 2 00:16:34.845 =====Discovery Log Entry 0====== 00:16:34.845 trtype: tcp 00:16:34.845 adrfam: ipv4 00:16:34.845 subtype: current discovery subsystem 00:16:34.845 treq: not specified, sq flow control disable supported 00:16:34.845 portid: 1 00:16:34.845 trsvcid: 4420 00:16:34.845 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:34.845 traddr: 10.0.0.1 00:16:34.845 eflags: none 00:16:34.845 sectype: none 00:16:34.845 =====Discovery Log Entry 1====== 00:16:34.845 trtype: tcp 00:16:34.845 adrfam: ipv4 00:16:34.845 subtype: nvme subsystem 00:16:34.845 treq: not specified, sq flow control disable supported 00:16:34.845 portid: 1 00:16:34.845 trsvcid: 4420 00:16:34.845 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:34.845 traddr: 10.0.0.1 00:16:34.845 eflags: none 00:16:34.845 sectype: none 00:16:34.845 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:34.845 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:34.845 ===================================================== 00:16:34.845 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:34.845 ===================================================== 00:16:34.845 Controller Capabilities/Features 00:16:34.845 ================================ 00:16:34.845 Vendor ID: 0000 00:16:34.845 Subsystem Vendor ID: 0000 00:16:34.845 Serial Number: d1b75ff9adf261363709 00:16:34.845 Model Number: Linux 00:16:34.845 Firmware Version: 6.7.0-68 00:16:34.845 Recommended Arb Burst: 0 00:16:34.845 IEEE OUI Identifier: 00 00 00 00:16:34.845 Multi-path I/O 00:16:34.845 May have multiple subsystem ports: No 00:16:34.845 May have multiple controllers: No 00:16:34.845 Associated with SR-IOV VF: No 00:16:34.845 Max Data Transfer Size: Unlimited 00:16:34.845 Max Number of Namespaces: 0 
00:16:34.845 Max Number of I/O Queues: 1024 00:16:34.845 NVMe Specification Version (VS): 1.3 00:16:34.845 NVMe Specification Version (Identify): 1.3 00:16:34.845 Maximum Queue Entries: 1024 00:16:34.845 Contiguous Queues Required: No 00:16:34.845 Arbitration Mechanisms Supported 00:16:34.845 Weighted Round Robin: Not Supported 00:16:34.845 Vendor Specific: Not Supported 00:16:34.845 Reset Timeout: 7500 ms 00:16:34.845 Doorbell Stride: 4 bytes 00:16:34.845 NVM Subsystem Reset: Not Supported 00:16:34.845 Command Sets Supported 00:16:34.845 NVM Command Set: Supported 00:16:34.845 Boot Partition: Not Supported 00:16:34.845 Memory Page Size Minimum: 4096 bytes 00:16:34.845 Memory Page Size Maximum: 4096 bytes 00:16:34.845 Persistent Memory Region: Not Supported 00:16:34.845 Optional Asynchronous Events Supported 00:16:34.845 Namespace Attribute Notices: Not Supported 00:16:34.845 Firmware Activation Notices: Not Supported 00:16:34.845 ANA Change Notices: Not Supported 00:16:34.845 PLE Aggregate Log Change Notices: Not Supported 00:16:34.845 LBA Status Info Alert Notices: Not Supported 00:16:34.845 EGE Aggregate Log Change Notices: Not Supported 00:16:34.845 Normal NVM Subsystem Shutdown event: Not Supported 00:16:34.845 Zone Descriptor Change Notices: Not Supported 00:16:34.845 Discovery Log Change Notices: Supported 00:16:34.845 Controller Attributes 00:16:34.845 128-bit Host Identifier: Not Supported 00:16:34.845 Non-Operational Permissive Mode: Not Supported 00:16:34.845 NVM Sets: Not Supported 00:16:34.845 Read Recovery Levels: Not Supported 00:16:34.845 Endurance Groups: Not Supported 00:16:34.845 Predictable Latency Mode: Not Supported 00:16:34.845 Traffic Based Keep ALive: Not Supported 00:16:34.845 Namespace Granularity: Not Supported 00:16:34.845 SQ Associations: Not Supported 00:16:34.845 UUID List: Not Supported 00:16:34.845 Multi-Domain Subsystem: Not Supported 00:16:34.845 Fixed Capacity Management: Not Supported 00:16:34.845 Variable Capacity Management: Not Supported 00:16:34.845 Delete Endurance Group: Not Supported 00:16:34.845 Delete NVM Set: Not Supported 00:16:34.845 Extended LBA Formats Supported: Not Supported 00:16:34.845 Flexible Data Placement Supported: Not Supported 00:16:34.845 00:16:34.845 Controller Memory Buffer Support 00:16:34.845 ================================ 00:16:34.845 Supported: No 00:16:34.845 00:16:34.845 Persistent Memory Region Support 00:16:34.845 ================================ 00:16:34.845 Supported: No 00:16:34.845 00:16:34.845 Admin Command Set Attributes 00:16:34.845 ============================ 00:16:34.845 Security Send/Receive: Not Supported 00:16:34.845 Format NVM: Not Supported 00:16:34.845 Firmware Activate/Download: Not Supported 00:16:34.845 Namespace Management: Not Supported 00:16:34.845 Device Self-Test: Not Supported 00:16:34.845 Directives: Not Supported 00:16:34.845 NVMe-MI: Not Supported 00:16:34.845 Virtualization Management: Not Supported 00:16:34.846 Doorbell Buffer Config: Not Supported 00:16:34.846 Get LBA Status Capability: Not Supported 00:16:34.846 Command & Feature Lockdown Capability: Not Supported 00:16:34.846 Abort Command Limit: 1 00:16:34.846 Async Event Request Limit: 1 00:16:34.846 Number of Firmware Slots: N/A 00:16:34.846 Firmware Slot 1 Read-Only: N/A 00:16:34.846 Firmware Activation Without Reset: N/A 00:16:34.846 Multiple Update Detection Support: N/A 00:16:34.846 Firmware Update Granularity: No Information Provided 00:16:34.846 Per-Namespace SMART Log: No 00:16:34.846 Asymmetric Namespace Access Log Page: 
Not Supported 00:16:34.846 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:34.846 Command Effects Log Page: Not Supported 00:16:34.846 Get Log Page Extended Data: Supported 00:16:34.846 Telemetry Log Pages: Not Supported 00:16:34.846 Persistent Event Log Pages: Not Supported 00:16:34.846 Supported Log Pages Log Page: May Support 00:16:34.846 Commands Supported & Effects Log Page: Not Supported 00:16:34.846 Feature Identifiers & Effects Log Page:May Support 00:16:34.846 NVMe-MI Commands & Effects Log Page: May Support 00:16:34.846 Data Area 4 for Telemetry Log: Not Supported 00:16:34.846 Error Log Page Entries Supported: 1 00:16:34.846 Keep Alive: Not Supported 00:16:34.846 00:16:34.846 NVM Command Set Attributes 00:16:34.846 ========================== 00:16:34.846 Submission Queue Entry Size 00:16:34.846 Max: 1 00:16:34.846 Min: 1 00:16:34.846 Completion Queue Entry Size 00:16:34.846 Max: 1 00:16:34.846 Min: 1 00:16:34.846 Number of Namespaces: 0 00:16:34.846 Compare Command: Not Supported 00:16:34.846 Write Uncorrectable Command: Not Supported 00:16:34.846 Dataset Management Command: Not Supported 00:16:34.846 Write Zeroes Command: Not Supported 00:16:34.846 Set Features Save Field: Not Supported 00:16:34.846 Reservations: Not Supported 00:16:34.846 Timestamp: Not Supported 00:16:34.846 Copy: Not Supported 00:16:34.846 Volatile Write Cache: Not Present 00:16:34.846 Atomic Write Unit (Normal): 1 00:16:34.846 Atomic Write Unit (PFail): 1 00:16:34.846 Atomic Compare & Write Unit: 1 00:16:34.846 Fused Compare & Write: Not Supported 00:16:34.846 Scatter-Gather List 00:16:34.846 SGL Command Set: Supported 00:16:34.846 SGL Keyed: Not Supported 00:16:34.846 SGL Bit Bucket Descriptor: Not Supported 00:16:34.846 SGL Metadata Pointer: Not Supported 00:16:34.846 Oversized SGL: Not Supported 00:16:34.846 SGL Metadata Address: Not Supported 00:16:34.846 SGL Offset: Supported 00:16:34.846 Transport SGL Data Block: Not Supported 00:16:34.846 Replay Protected Memory Block: Not Supported 00:16:34.846 00:16:34.846 Firmware Slot Information 00:16:34.846 ========================= 00:16:34.846 Active slot: 0 00:16:34.846 00:16:34.846 00:16:34.846 Error Log 00:16:34.846 ========= 00:16:34.846 00:16:34.846 Active Namespaces 00:16:34.846 ================= 00:16:34.846 Discovery Log Page 00:16:34.846 ================== 00:16:34.846 Generation Counter: 2 00:16:34.846 Number of Records: 2 00:16:34.846 Record Format: 0 00:16:34.846 00:16:34.846 Discovery Log Entry 0 00:16:34.846 ---------------------- 00:16:34.846 Transport Type: 3 (TCP) 00:16:34.846 Address Family: 1 (IPv4) 00:16:34.846 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:34.846 Entry Flags: 00:16:34.846 Duplicate Returned Information: 0 00:16:34.846 Explicit Persistent Connection Support for Discovery: 0 00:16:34.846 Transport Requirements: 00:16:34.846 Secure Channel: Not Specified 00:16:34.846 Port ID: 1 (0x0001) 00:16:34.846 Controller ID: 65535 (0xffff) 00:16:34.846 Admin Max SQ Size: 32 00:16:34.846 Transport Service Identifier: 4420 00:16:34.846 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:34.846 Transport Address: 10.0.0.1 00:16:34.846 Discovery Log Entry 1 00:16:34.846 ---------------------- 00:16:34.846 Transport Type: 3 (TCP) 00:16:34.846 Address Family: 1 (IPv4) 00:16:34.846 Subsystem Type: 2 (NVM Subsystem) 00:16:34.846 Entry Flags: 00:16:34.846 Duplicate Returned Information: 0 00:16:34.846 Explicit Persistent Connection Support for Discovery: 0 00:16:34.846 Transport Requirements: 00:16:34.846 
Secure Channel: Not Specified 00:16:34.846 Port ID: 1 (0x0001) 00:16:34.846 Controller ID: 65535 (0xffff) 00:16:34.846 Admin Max SQ Size: 32 00:16:34.846 Transport Service Identifier: 4420 00:16:34.846 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:34.846 Transport Address: 10.0.0.1 00:16:34.846 12:58:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:35.105 get_feature(0x01) failed 00:16:35.105 get_feature(0x02) failed 00:16:35.105 get_feature(0x04) failed 00:16:35.105 ===================================================== 00:16:35.105 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:35.105 ===================================================== 00:16:35.105 Controller Capabilities/Features 00:16:35.105 ================================ 00:16:35.105 Vendor ID: 0000 00:16:35.105 Subsystem Vendor ID: 0000 00:16:35.105 Serial Number: 0c1d5ee71afb0ab388b4 00:16:35.105 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:35.105 Firmware Version: 6.7.0-68 00:16:35.105 Recommended Arb Burst: 6 00:16:35.105 IEEE OUI Identifier: 00 00 00 00:16:35.105 Multi-path I/O 00:16:35.105 May have multiple subsystem ports: Yes 00:16:35.105 May have multiple controllers: Yes 00:16:35.105 Associated with SR-IOV VF: No 00:16:35.105 Max Data Transfer Size: Unlimited 00:16:35.105 Max Number of Namespaces: 1024 00:16:35.105 Max Number of I/O Queues: 128 00:16:35.105 NVMe Specification Version (VS): 1.3 00:16:35.105 NVMe Specification Version (Identify): 1.3 00:16:35.105 Maximum Queue Entries: 1024 00:16:35.105 Contiguous Queues Required: No 00:16:35.105 Arbitration Mechanisms Supported 00:16:35.105 Weighted Round Robin: Not Supported 00:16:35.105 Vendor Specific: Not Supported 00:16:35.105 Reset Timeout: 7500 ms 00:16:35.105 Doorbell Stride: 4 bytes 00:16:35.105 NVM Subsystem Reset: Not Supported 00:16:35.105 Command Sets Supported 00:16:35.105 NVM Command Set: Supported 00:16:35.105 Boot Partition: Not Supported 00:16:35.105 Memory Page Size Minimum: 4096 bytes 00:16:35.105 Memory Page Size Maximum: 4096 bytes 00:16:35.105 Persistent Memory Region: Not Supported 00:16:35.105 Optional Asynchronous Events Supported 00:16:35.105 Namespace Attribute Notices: Supported 00:16:35.105 Firmware Activation Notices: Not Supported 00:16:35.105 ANA Change Notices: Supported 00:16:35.105 PLE Aggregate Log Change Notices: Not Supported 00:16:35.105 LBA Status Info Alert Notices: Not Supported 00:16:35.105 EGE Aggregate Log Change Notices: Not Supported 00:16:35.105 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.105 Zone Descriptor Change Notices: Not Supported 00:16:35.105 Discovery Log Change Notices: Not Supported 00:16:35.105 Controller Attributes 00:16:35.105 128-bit Host Identifier: Supported 00:16:35.105 Non-Operational Permissive Mode: Not Supported 00:16:35.105 NVM Sets: Not Supported 00:16:35.105 Read Recovery Levels: Not Supported 00:16:35.105 Endurance Groups: Not Supported 00:16:35.105 Predictable Latency Mode: Not Supported 00:16:35.105 Traffic Based Keep ALive: Supported 00:16:35.105 Namespace Granularity: Not Supported 00:16:35.105 SQ Associations: Not Supported 00:16:35.105 UUID List: Not Supported 00:16:35.105 Multi-Domain Subsystem: Not Supported 00:16:35.105 Fixed Capacity Management: Not Supported 00:16:35.105 Variable Capacity Management: Not Supported 00:16:35.105 
Delete Endurance Group: Not Supported 00:16:35.105 Delete NVM Set: Not Supported 00:16:35.105 Extended LBA Formats Supported: Not Supported 00:16:35.105 Flexible Data Placement Supported: Not Supported 00:16:35.105 00:16:35.105 Controller Memory Buffer Support 00:16:35.105 ================================ 00:16:35.105 Supported: No 00:16:35.105 00:16:35.105 Persistent Memory Region Support 00:16:35.105 ================================ 00:16:35.105 Supported: No 00:16:35.105 00:16:35.105 Admin Command Set Attributes 00:16:35.105 ============================ 00:16:35.105 Security Send/Receive: Not Supported 00:16:35.105 Format NVM: Not Supported 00:16:35.105 Firmware Activate/Download: Not Supported 00:16:35.105 Namespace Management: Not Supported 00:16:35.105 Device Self-Test: Not Supported 00:16:35.105 Directives: Not Supported 00:16:35.105 NVMe-MI: Not Supported 00:16:35.105 Virtualization Management: Not Supported 00:16:35.105 Doorbell Buffer Config: Not Supported 00:16:35.105 Get LBA Status Capability: Not Supported 00:16:35.105 Command & Feature Lockdown Capability: Not Supported 00:16:35.105 Abort Command Limit: 4 00:16:35.105 Async Event Request Limit: 4 00:16:35.105 Number of Firmware Slots: N/A 00:16:35.105 Firmware Slot 1 Read-Only: N/A 00:16:35.105 Firmware Activation Without Reset: N/A 00:16:35.105 Multiple Update Detection Support: N/A 00:16:35.105 Firmware Update Granularity: No Information Provided 00:16:35.105 Per-Namespace SMART Log: Yes 00:16:35.105 Asymmetric Namespace Access Log Page: Supported 00:16:35.105 ANA Transition Time : 10 sec 00:16:35.105 00:16:35.105 Asymmetric Namespace Access Capabilities 00:16:35.105 ANA Optimized State : Supported 00:16:35.105 ANA Non-Optimized State : Supported 00:16:35.105 ANA Inaccessible State : Supported 00:16:35.105 ANA Persistent Loss State : Supported 00:16:35.105 ANA Change State : Supported 00:16:35.105 ANAGRPID is not changed : No 00:16:35.105 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:35.105 00:16:35.105 ANA Group Identifier Maximum : 128 00:16:35.105 Number of ANA Group Identifiers : 128 00:16:35.105 Max Number of Allowed Namespaces : 1024 00:16:35.105 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:35.105 Command Effects Log Page: Supported 00:16:35.105 Get Log Page Extended Data: Supported 00:16:35.105 Telemetry Log Pages: Not Supported 00:16:35.105 Persistent Event Log Pages: Not Supported 00:16:35.105 Supported Log Pages Log Page: May Support 00:16:35.105 Commands Supported & Effects Log Page: Not Supported 00:16:35.105 Feature Identifiers & Effects Log Page:May Support 00:16:35.105 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.105 Data Area 4 for Telemetry Log: Not Supported 00:16:35.105 Error Log Page Entries Supported: 128 00:16:35.106 Keep Alive: Supported 00:16:35.106 Keep Alive Granularity: 1000 ms 00:16:35.106 00:16:35.106 NVM Command Set Attributes 00:16:35.106 ========================== 00:16:35.106 Submission Queue Entry Size 00:16:35.106 Max: 64 00:16:35.106 Min: 64 00:16:35.106 Completion Queue Entry Size 00:16:35.106 Max: 16 00:16:35.106 Min: 16 00:16:35.106 Number of Namespaces: 1024 00:16:35.106 Compare Command: Not Supported 00:16:35.106 Write Uncorrectable Command: Not Supported 00:16:35.106 Dataset Management Command: Supported 00:16:35.106 Write Zeroes Command: Supported 00:16:35.106 Set Features Save Field: Not Supported 00:16:35.106 Reservations: Not Supported 00:16:35.106 Timestamp: Not Supported 00:16:35.106 Copy: Not Supported 00:16:35.106 Volatile Write Cache: Present 
00:16:35.106 Atomic Write Unit (Normal): 1 00:16:35.106 Atomic Write Unit (PFail): 1 00:16:35.106 Atomic Compare & Write Unit: 1 00:16:35.106 Fused Compare & Write: Not Supported 00:16:35.106 Scatter-Gather List 00:16:35.106 SGL Command Set: Supported 00:16:35.106 SGL Keyed: Not Supported 00:16:35.106 SGL Bit Bucket Descriptor: Not Supported 00:16:35.106 SGL Metadata Pointer: Not Supported 00:16:35.106 Oversized SGL: Not Supported 00:16:35.106 SGL Metadata Address: Not Supported 00:16:35.106 SGL Offset: Supported 00:16:35.106 Transport SGL Data Block: Not Supported 00:16:35.106 Replay Protected Memory Block: Not Supported 00:16:35.106 00:16:35.106 Firmware Slot Information 00:16:35.106 ========================= 00:16:35.106 Active slot: 0 00:16:35.106 00:16:35.106 Asymmetric Namespace Access 00:16:35.106 =========================== 00:16:35.106 Change Count : 0 00:16:35.106 Number of ANA Group Descriptors : 1 00:16:35.106 ANA Group Descriptor : 0 00:16:35.106 ANA Group ID : 1 00:16:35.106 Number of NSID Values : 1 00:16:35.106 Change Count : 0 00:16:35.106 ANA State : 1 00:16:35.106 Namespace Identifier : 1 00:16:35.106 00:16:35.106 Commands Supported and Effects 00:16:35.106 ============================== 00:16:35.106 Admin Commands 00:16:35.106 -------------- 00:16:35.106 Get Log Page (02h): Supported 00:16:35.106 Identify (06h): Supported 00:16:35.106 Abort (08h): Supported 00:16:35.106 Set Features (09h): Supported 00:16:35.106 Get Features (0Ah): Supported 00:16:35.106 Asynchronous Event Request (0Ch): Supported 00:16:35.106 Keep Alive (18h): Supported 00:16:35.106 I/O Commands 00:16:35.106 ------------ 00:16:35.106 Flush (00h): Supported 00:16:35.106 Write (01h): Supported LBA-Change 00:16:35.106 Read (02h): Supported 00:16:35.106 Write Zeroes (08h): Supported LBA-Change 00:16:35.106 Dataset Management (09h): Supported 00:16:35.106 00:16:35.106 Error Log 00:16:35.106 ========= 00:16:35.106 Entry: 0 00:16:35.106 Error Count: 0x3 00:16:35.106 Submission Queue Id: 0x0 00:16:35.106 Command Id: 0x5 00:16:35.106 Phase Bit: 0 00:16:35.106 Status Code: 0x2 00:16:35.106 Status Code Type: 0x0 00:16:35.106 Do Not Retry: 1 00:16:35.106 Error Location: 0x28 00:16:35.106 LBA: 0x0 00:16:35.106 Namespace: 0x0 00:16:35.106 Vendor Log Page: 0x0 00:16:35.106 ----------- 00:16:35.106 Entry: 1 00:16:35.106 Error Count: 0x2 00:16:35.106 Submission Queue Id: 0x0 00:16:35.106 Command Id: 0x5 00:16:35.106 Phase Bit: 0 00:16:35.106 Status Code: 0x2 00:16:35.106 Status Code Type: 0x0 00:16:35.106 Do Not Retry: 1 00:16:35.106 Error Location: 0x28 00:16:35.106 LBA: 0x0 00:16:35.106 Namespace: 0x0 00:16:35.106 Vendor Log Page: 0x0 00:16:35.106 ----------- 00:16:35.106 Entry: 2 00:16:35.106 Error Count: 0x1 00:16:35.106 Submission Queue Id: 0x0 00:16:35.106 Command Id: 0x4 00:16:35.106 Phase Bit: 0 00:16:35.106 Status Code: 0x2 00:16:35.106 Status Code Type: 0x0 00:16:35.106 Do Not Retry: 1 00:16:35.106 Error Location: 0x28 00:16:35.106 LBA: 0x0 00:16:35.106 Namespace: 0x0 00:16:35.106 Vendor Log Page: 0x0 00:16:35.106 00:16:35.106 Number of Queues 00:16:35.106 ================ 00:16:35.106 Number of I/O Submission Queues: 128 00:16:35.106 Number of I/O Completion Queues: 128 00:16:35.106 00:16:35.106 ZNS Specific Controller Data 00:16:35.106 ============================ 00:16:35.106 Zone Append Size Limit: 0 00:16:35.106 00:16:35.106 00:16:35.106 Active Namespaces 00:16:35.106 ================= 00:16:35.106 get_feature(0x05) failed 00:16:35.106 Namespace ID:1 00:16:35.106 Command Set Identifier: NVM (00h) 
00:16:35.106 Deallocate: Supported 00:16:35.106 Deallocated/Unwritten Error: Not Supported 00:16:35.106 Deallocated Read Value: Unknown 00:16:35.106 Deallocate in Write Zeroes: Not Supported 00:16:35.106 Deallocated Guard Field: 0xFFFF 00:16:35.106 Flush: Supported 00:16:35.106 Reservation: Not Supported 00:16:35.106 Namespace Sharing Capabilities: Multiple Controllers 00:16:35.106 Size (in LBAs): 1310720 (5GiB) 00:16:35.106 Capacity (in LBAs): 1310720 (5GiB) 00:16:35.106 Utilization (in LBAs): 1310720 (5GiB) 00:16:35.106 UUID: 8642fa22-0cc8-4996-bb1b-12de2f23c9ed 00:16:35.106 Thin Provisioning: Not Supported 00:16:35.106 Per-NS Atomic Units: Yes 00:16:35.106 Atomic Boundary Size (Normal): 0 00:16:35.106 Atomic Boundary Size (PFail): 0 00:16:35.106 Atomic Boundary Offset: 0 00:16:35.106 NGUID/EUI64 Never Reused: No 00:16:35.106 ANA group ID: 1 00:16:35.106 Namespace Write Protected: No 00:16:35.106 Number of LBA Formats: 1 00:16:35.106 Current LBA Format: LBA Format #00 00:16:35.106 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:35.106 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.106 rmmod nvme_tcp 00:16:35.106 rmmod nvme_fabrics 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.106 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:35.364 
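The identify runs are finished at this point and nvmftestfini has already unloaded nvme-tcp/nvme-fabrics; the entries that follow are clean_kernel_target undoing the configfs target built earlier. Read together, the setup (the mkdir/echo/ln -s trace above) and this teardown pair off roughly as below. The values are the ones in the log, but the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are inferred from the standard nvmet configfs layout, since the trace only shows the values being echoed; treat them as assumptions:

    # Rough sketch of the kernel nvmet target lifecycle traced in this test.
    cfg=/sys/kernel/config/nvmet
    nqn=nqn.2016-06.io.spdk:testnqn

    # --- setup (mirrors the mkdir/echo/ln -s entries above) ---
    mkdir "$cfg/subsystems/$nqn"
    mkdir "$cfg/subsystems/$nqn/namespaces/1"
    mkdir "$cfg/ports/1"
    echo "SPDK-$nqn"  > "$cfg/subsystems/$nqn/attr_model"            # model string seen in the identify output
    echo 1            > "$cfg/subsystems/$nqn/attr_allow_any_host"   # assumed target of the bare "echo 1"
    echo /dev/nvme1n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
    echo tcp          > "$cfg/ports/1/addr_trtype"
    echo 4420         > "$cfg/ports/1/addr_trsvcid"
    echo ipv4         > "$cfg/ports/1/addr_adrfam"
    ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/$nqn"

    # --- teardown (mirrors the echo 0 / rm -f / rmdir entries that follow) ---
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"              # assumed target of the bare "echo 0"
    rm -f "$cfg/ports/1/subsystems/$nqn"
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet

Once the port/subsystem link exists, the discovery and identify output above is what nvme discover and spdk_nvme_identify observe when pointed at 10.0.0.1:4420.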
12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:35.364 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:35.365 12:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:35.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:35.931 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:36.189 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:36.189 00:16:36.189 real 0m2.865s 00:16:36.189 user 0m0.946s 00:16:36.189 sys 0m1.374s 00:16:36.189 12:58:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.189 12:58:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.189 ************************************ 00:16:36.189 END TEST nvmf_identify_kernel_target 00:16:36.189 ************************************ 00:16:36.189 12:58:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:36.189 12:58:52 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:36.189 12:58:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:36.189 12:58:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.189 12:58:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.189 ************************************ 00:16:36.189 START TEST nvmf_auth_host 00:16:36.189 ************************************ 00:16:36.189 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:36.448 * Looking for test storage... 
00:16:36.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.448 12:58:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:36.449 Cannot find device "nvmf_tgt_br" 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.449 Cannot find device "nvmf_tgt_br2" 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:36.449 Cannot find device "nvmf_tgt_br" 
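nvmftestinit is now building the virtual test network. Flushing stale interfaces produces the "Cannot find device" noise above (harmless on a clean host); the entries that follow create the target network namespace, three veth pairs, a bridge joining them, and the 10.0.0.x addressing used by the rest of the suite. Condensed into one sketch, using the interface and namespace names from the trace:

    # Sketch of the veth/bridge topology nvmf_veth_init sets up (per the trace).
    # The initiator stays in the root namespace; the target runs inside $NS.
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1 <-> bridge
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2 <-> bridge
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (port 4420) in on the initiator-side veth and bridge forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping statistics that follow are the sanity check that 10.0.0.2 and 10.0.0.3 are reachable from the root namespace, and 10.0.0.1 from inside the namespace, before nvmf_tgt is started there.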
00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:36.449 Cannot find device "nvmf_tgt_br2" 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.449 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:36.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:36.708 00:16:36.708 --- 10.0.0.2 ping statistics --- 00:16:36.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.708 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:36.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:36.708 00:16:36.708 --- 10.0.0.3 ping statistics --- 00:16:36.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.708 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:36.708 00:16:36.708 --- 10.0.0.1 ping statistics --- 00:16:36.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.708 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78608 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78608 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78608 ']' 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.708 12:58:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.708 12:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b423e5321956e9e187e4eea57febf00 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dsf 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b423e5321956e9e187e4eea57febf00 0 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b423e5321956e9e187e4eea57febf00 0 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b423e5321956e9e187e4eea57febf00 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dsf 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dsf 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dsf 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=642a59e68353e4bc10eea6923919f45897c1e99790c1151af3e79a57c856b175 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.At9 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 642a59e68353e4bc10eea6923919f45897c1e99790c1151af3e79a57c856b175 3 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 642a59e68353e4bc10eea6923919f45897c1e99790c1151af3e79a57c856b175 3 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=642a59e68353e4bc10eea6923919f45897c1e99790c1151af3e79a57c856b175 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.At9 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.At9 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.At9 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ef94c6c4e4f346b5b11ef8a07e82a2efae2bee6b78fff36d 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BVN 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ef94c6c4e4f346b5b11ef8a07e82a2efae2bee6b78fff36d 0 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ef94c6c4e4f346b5b11ef8a07e82a2efae2bee6b78fff36d 0 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ef94c6c4e4f346b5b11ef8a07e82a2efae2bee6b78fff36d 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:38.079 12:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BVN 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BVN 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BVN 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=95f8c96f5500c107b986fe946457d099a12f4a32d03463b2 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wi0 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 95f8c96f5500c107b986fe946457d099a12f4a32d03463b2 2 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 95f8c96f5500c107b986fe946457d099a12f4a32d03463b2 2 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=95f8c96f5500c107b986fe946457d099a12f4a32d03463b2 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wi0 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wi0 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wi0 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e878f66c6410baca4d623d99e10ecb0 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZJc 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e878f66c6410baca4d623d99e10ecb0 
1 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e878f66c6410baca4d623d99e10ecb0 1 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e878f66c6410baca4d623d99e10ecb0 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:38.079 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZJc 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZJc 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZJc 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d18cb4c0f41020a45fd365a441e0e4b2 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VjH 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d18cb4c0f41020a45fd365a441e0e4b2 1 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d18cb4c0f41020a45fd365a441e0e4b2 1 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d18cb4c0f41020a45fd365a441e0e4b2 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VjH 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VjH 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VjH 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:38.336 12:58:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=58b354b188303da1485020ed207f4a3b0b22e2d6386eafb6 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6sh 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 58b354b188303da1485020ed207f4a3b0b22e2d6386eafb6 2 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 58b354b188303da1485020ed207f4a3b0b22e2d6386eafb6 2 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=58b354b188303da1485020ed207f4a3b0b22e2d6386eafb6 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6sh 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6sh 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6sh 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1bd3fdd316224d55e936d87213d2c950 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4rq 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1bd3fdd316224d55e936d87213d2c950 0 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1bd3fdd316224d55e936d87213d2c950 0 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1bd3fdd316224d55e936d87213d2c950 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4rq 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4rq 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4rq 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=35a69bd29300d59e8f159e74959a57c4eb2360843d2ea6ae844082cd791d761c 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KFU 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 35a69bd29300d59e8f159e74959a57c4eb2360843d2ea6ae844082cd791d761c 3 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 35a69bd29300d59e8f159e74959a57c4eb2360843d2ea6ae844082cd791d761c 3 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=35a69bd29300d59e8f159e74959a57c4eb2360843d2ea6ae844082cd791d761c 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:38.336 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KFU 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KFU 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KFU 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78608 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78608 ']' 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
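The gen_dhchap_key/format_key helpers traced above (nvmf/common.sh) draw len/2 random bytes as a hex string via xxd and wrap that ASCII secret into the DH-HMAC-CHAP representation DHHC-1:<digest id>:<base64(secret + CRC32)>: before writing it to a chmod-0600 tempfile. A minimal stand-alone sketch of the same transformation, assuming bash with xxd and python3 on PATH; the function name and the little-endian CRC byte order are this sketch's reading of the helper, not a verbatim copy:

  gen_dhchap_key_sketch() {   # illustrative name, not the real helper
      local digest=$1 len=$2 hexkey
      # len hex characters == len/2 random bytes (48 -> 24 bytes for the sha384 keys above)
      hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      # emit DHHC-1:<digest id>:<base64(ASCII secret followed by its CRC32)>:
      python3 -c 'import base64,sys,zlib;k=sys.argv[2].encode();crc=zlib.crc32(k).to_bytes(4,"little");print("DHHC-1:%02x:%s:"%(int(sys.argv[1]),base64.b64encode(k+crc).decode()),end="")' "$digest" "$hexkey"
  }

  # usage mirroring the trace, e.g. a sha384 (digest id 2) key of 48 hex characters:
  # file=$(mktemp -t spdk.key-sha384.XXX); gen_dhchap_key_sketch 2 48 > "$file"; chmod 0600 "$file"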
00:16:38.593 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.594 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dsf 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.At9 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.At9 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BVN 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wi0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wi0 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZJc 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VjH ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VjH 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
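With the target app (pid 78608) now listening on /var/tmp/spdk.sock, the loop above, which continues below through key3/ckey3 and key4, hands each secret file to SPDK's keyring by name: keyN for slot N's host key and ckeyN for its controller key when one was generated (ckeys[4] is empty in this run). The rpc_cmd wrapper boils down to scripts/rpc.py against that socket; a hedged equivalent of the loop, assuming the keys/ckeys arrays populated above:

  for i in "${!keys[@]}"; do
      scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${keys[i]}"
      # slots without a generated controller key (ckeys[4] here) are skipped
      [[ -n ${ckeys[i]} ]] && scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
  done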
00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6sh 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4rq ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4rq 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KFU 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
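nvmet_auth_init (host/auth.sh) now stands up a Linux kernel NVMe-oF target for the auth tests to dial back into: configure_kernel_target loads nvmet, runs setup.sh reset, picks an unused local namespace as the backing block device, and drives the rest through configfs. xtrace does not print redirection targets, so the bare mkdir/echo/ln calls that follow are shown here with attribute paths spelled out; the NQN, address and port are the ones from this run, while the mapping of each echo onto a standard nvmet configfs attribute is inferred rather than quoted from the helper:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1

  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string reported to the host
  echo 1 > "$subsys/attr_allow_any_host"                        # the auth test later links an allowed_hosts entry for nqn.2024-02.io.spdk:host0
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"        # unused local namespace picked by the block scan below
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"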
00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:38.851 12:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:39.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:39.367 Waiting for block devices as requested 00:16:39.367 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:39.367 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:39.931 No valid GPT data, bailing 00:16:39.931 12:58:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:40.189 No valid GPT data, bailing 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:40.189 No valid GPT data, bailing 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:40.189 No valid GPT data, bailing 00:16:40.189 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:40.447 12:58:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:40.447 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88 -a 10.0.0.1 -t tcp -s 4420 00:16:40.447 00:16:40.447 Discovery Log Number of Records 2, Generation counter 2 00:16:40.447 =====Discovery Log Entry 0====== 00:16:40.447 trtype: tcp 00:16:40.447 adrfam: ipv4 00:16:40.447 subtype: current discovery subsystem 00:16:40.447 treq: not specified, sq flow control disable supported 00:16:40.447 portid: 1 00:16:40.447 trsvcid: 4420 00:16:40.447 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:40.447 traddr: 10.0.0.1 00:16:40.447 eflags: none 00:16:40.447 sectype: none 00:16:40.447 =====Discovery Log Entry 1====== 00:16:40.447 trtype: tcp 00:16:40.447 adrfam: ipv4 00:16:40.448 subtype: nvme subsystem 00:16:40.448 treq: not specified, sq flow control disable supported 00:16:40.448 portid: 1 00:16:40.448 trsvcid: 4420 00:16:40.448 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:40.448 traddr: 10.0.0.1 00:16:40.448 eflags: none 00:16:40.448 sectype: none 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.448 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.704 nvme0n1 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.704 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.705 nvme0n1 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.705 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:40.961 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.962 nvme0n1 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.962 12:58:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.962 12:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.962 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.219 nvme0n1 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.219 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:41.220 12:58:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.220 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.478 nvme0n1 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.478 nvme0n1 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.478 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.045 nvme0n1 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.045 12:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.045 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.302 nvme0n1 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.302 12:58:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.302 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.303 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.560 nvme0n1 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.560 nvme0n1 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.560 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.818 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
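For reference, every (digest, dhgroup, keyid) iteration traced here drives the same four host-side RPCs before moving on to the next combination. A minimal sketch of that sequence, assuming rpc_cmd resolves to SPDK's scripts/rpc.py client and reusing the address, NQNs, and key names exactly as they appear in the trace (keyid 4 has no companion ckey, so the controller-key option is simply omitted for it):

  # configure the digests and DH groups the initiator may negotiate for DH-HMAC-CHAP
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # attach to the target; authentication runs as part of the connect
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4                     # plus --dhchap-ctrlr-key ckeyN when a controller key was generated
  # the script's pass criterion: the new controller shows up under its expected name
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  # detach so the next dhgroup/keyid combination starts from a clean state
  rpc_cmd bdev_nvme_detach_controller nvme0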
00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.819 nvme0n1 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.819 12:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
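The nvmet_auth_set_key calls in the trace only show the echo side of each write; xtrace drops the redirect targets. Assuming the values land in the kernel nvmet configfs entry for this host NQN (the path and attribute names below are an assumption based on the usual Linux nvmet auth layout and are not visible in this log), one keyid's worth of echoes would correspond roughly to:

  # hypothetical target-side view of: nvmet_auth_set_key sha256 ffdhe4096 0
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path
  echo 'hmac(sha256)'         > "$host_dir/dhchap_hash"        # digest used for DH-HMAC-CHAP
  echo ffdhe4096              > "$host_dir/dhchap_dhgroup"     # DH group under test
  echo "DHHC-1:00:MmI0...88X:" > "$host_dir/dhchap_key"        # host key for keyid 0 (full value printed in the trace)
  echo "DHHC-1:03:NjQy...Ia8=:" > "$host_dir/dhchap_ctrl_key"  # controller key, written only when ckey is non-empty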
00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.383 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.640 nvme0n1 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.640 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.898 nvme0n1 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.898 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.156 12:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.156 nvme0n1 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.156 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.416 nvme0n1 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.416 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.895 12:59:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.895 nvme0n1 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.895 12:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.794 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.795 nvme0n1 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.795 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.052 12:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.309 nvme0n1 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.309 
12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.309 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.567 nvme0n1 00:16:47.567 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.567 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.567 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.567 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.567 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.825 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.826 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.826 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.826 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.826 12:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.826 12:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:47.826 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.826 12:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.083 nvme0n1 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.083 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.084 12:59:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.084 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.379 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.646 nvme0n1 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.646 12:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.214 nvme0n1 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.214 12:59:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.214 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.782 nvme0n1 00:16:49.782 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.042 12:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.610 nvme0n1 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.610 
12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
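[Note] The trace above repeats one cycle per key: nvmet_auth_set_key pushes the DH-HMAC-CHAP parameters for the host to the kernel target, then connect_authenticate restricts the initiator to the digest/dhgroup under test, attaches with the matching key, verifies the controller came up, and detaches. A minimal sketch of one such iteration follows, assuming the helper bodies mirror the RPCs visible in the trace; the configfs destinations are an assumption and are not shown in this log, and rpc_cmd plus the keys/ckeys arrays are the helpers/arrays already used by the traced script.

  # One iteration of the traced per-key cycle (sketch; anything marked
  # "assumed" is not visible in this log).
  digest=sha256 dhgroup=ffdhe8192 keyid=3
  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0

  # Target side (nvmet_auth_set_key): the echoed 'hmac(sha256)', dhgroup and
  # DHHC-1 secrets in the trace correspond to writes of the host's
  # DH-HMAC-CHAP settings into the kernel nvmet configfs (assumed location).
  echo "hmac($digest)"  > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash"      # assumed path
  echo "$dhgroup"       > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup"   # assumed path
  echo "${keys[keyid]}" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key"       # assumed path
  [[ -n ${ckeys[keyid]} ]] && \
    echo "${ckeys[keyid]}" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key"  # assumed path

  # Initiator side (connect_authenticate), exactly as traced: allow only the
  # digest/dhgroup under test, attach with the matching key (plus controller
  # key for bidirectional auth when one exists), verify, and detach.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The 10.0.0.1 passed to -a comes from the get_main_ns_ip helper seen between the two RPCs: it maps the transport to an environment variable via the ip_candidates associative array (NVMF_INITIATOR_IP for tcp) and echoes its value.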
00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.610 12:59:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:50.611 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.611 12:59:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.178 nvme0n1 00:16:51.178 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.178 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.178 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.178 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.178 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.178 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:51.437 
12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.437 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 nvme0n1 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.006 12:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.006 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.006 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.006 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.265 nvme0n1 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.265 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
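[Note] At this point the trace has advanced from sha256 to sha384 and from ffdhe8192 back to ffdhe2048, i.e. the outer loops have moved on while the keyid loop restarts at 0. A sketch of the nesting implied by the for-lines at host/auth.sh@100-102 follows; only the digest and dhgroup values visible in this excerpt are confirmed, the remaining array contents are assumptions.

  # Loop nesting implied by "for digest ..." (@100), "for dhgroup ..." (@101)
  # and "for keyid ..." (@102) in the trace.
  digests=(sha256 sha384)                    # sha512 may follow; not shown in this excerpt
  dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)   # other ffdhe groups may be included; not shown here
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do         # keys[0..4], as iterated above
        nvmet_auth_set_key  "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done

Key 4 has no companion entry in ckeys, which is why its iterations set ckey to the empty string and attach without --dhchap-ctrlr-key (unidirectional authentication only).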
00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.266 nvme0n1 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.266 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.525 nvme0n1 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.525 12:59:08 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.526 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.803 nvme0n1 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.803 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.804 nvme0n1 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.804 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
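The xtrace output above and below repeats one fixed cycle per DH group and key index. A minimal sketch of the loop implied by the host/auth.sh@101-@104 markers in this trace (the digest is pinned to sha384 in this part of the log; the array names and helper signatures are taken from the trace itself, anything beyond that is an assumption):

    # dhgroups[] and keys[]/ckeys[] are populated earlier in host/auth.sh; the
    # DHHC-1 secrets visible in this log are the values held in keys[]/ckeys[].
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do         # key indices 0..4
            # program the target side with the key (and controller key, if any) for this index
            nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"
            # attach with the matching host-side key, verify, then tear down
            connect_authenticate "sha384" "$dhgroup" "$keyid"
        done
    done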
00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.126 12:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.126 nvme0n1 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.126 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
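Each connect_authenticate pass in the trace reduces to four initiator-side RPC calls. A condensed sketch built from the exact commands visible in this log (rpc_cmd is assumed to be SPDK's usual wrapper around scripts/rpc.py, and key1/ckey1 are the key names registered earlier in the test; ffdhe3072 and keyid 1 match the iteration shown here):

    # restrict the initiator to the digest/DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # attach to the target with the host key (the --dhchap-ctrlr-key argument is
    # dropped for key indices whose ckey is empty, e.g. keyid 4)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # authentication succeeded only if the controller actually shows up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # detach so the next digest/dhgroup/keyid combination starts from a clean state
    rpc_cmd bdev_nvme_detach_controller nvme0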
00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.127 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.387 nvme0n1 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.387 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.646 nvme0n1 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.646 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.647 nvme0n1 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.647 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.906 nvme0n1 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.906 12:59:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.906 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.907 12:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.166 nvme0n1 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.166 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.425 nvme0n1 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.425 12:59:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:54.425 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.684 nvme0n1 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.684 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:54.943 12:59:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.943 nvme0n1 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.943 12:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.201 nvme0n1 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.201 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.460 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.719 nvme0n1 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.719 12:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.286 nvme0n1 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.286 12:59:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:56.286 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.287 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.546 nvme0n1 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.546 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.114 nvme0n1 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.114 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.115 12:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.115 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.372 nvme0n1 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.373 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.638 12:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.230 nvme0n1 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.230 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.797 nvme0n1 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:16:58.797 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.798 12:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.735 nvme0n1 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.735 12:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.303 nvme0n1 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.303 12:59:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.303 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.871 nvme0n1 00:17:00.871 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.872 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.131 nvme0n1 00:17:01.131 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.131 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.131 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.131 12:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.131 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.131 12:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.131 12:59:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.131 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.132 nvme0n1 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.132 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.391 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.391 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.391 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:01.391 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.391 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.391 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.392 nvme0n1 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.392 12:59:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.392 12:59:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.392 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.651 nvme0n1 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.651 nvme0n1 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.651 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.911 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.912 nvme0n1 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.912 
12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.912 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.179 12:59:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.179 12:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 nvme0n1 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
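(For orientation: each connect_authenticate pass traced here reduces to the RPC sequence below. This is a condensed sketch built only from commands visible in this trace; rpc_cmd and nvmet_auth_set_key are the test suite's own helpers, and the 10.0.0.1:4420 listener and the host/subsystem NQNs are the values observed in this run.)

  # target side: install key/ctrlr-key 2 for hmac(sha512) with dhgroup ffdhe3072
  nvmet_auth_set_key sha512 ffdhe3072 2
  # host side: restrict the initiator to the digest/dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # connect with DH-HMAC-CHAP, offering key2 and expecting ckey2 back from the controller
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify the controller came up, then detach before the next keyid/dhgroup pass
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0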
00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.179 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.438 nvme0n1 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.438 12:59:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.438 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
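(The get_main_ns_ip fragment the trace has just entered, nvmf/common.sh@741-755, only maps the transport under test to the environment variable that holds the initiator-facing address and then dereferences it. A rough reconstruction, not the verbatim helper: the transport variable name TEST_TRANSPORT is an assumption here, the trace only shows that its value is tcp and that the resulting address is 10.0.0.1.)

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # assumed: the transport is carried in TEST_TRANSPORT (tcp in this run)
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # indirect expansion turns the variable *name* into its value
      echo "${!ip}"
  }

  # with the values observed in this run (variable name assumed as above):
  NVMF_INITIATOR_IP=10.0.0.1 TEST_TRANSPORT=tcp
  get_main_ns_ip    # -> 10.0.0.1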
00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.439 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.439 nvme0n1 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.698 
12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:02.698 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.699 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.699 nvme0n1 00:17:02.699 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.699 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.699 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.699 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.699 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.699 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:02.957 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.958 nvme0n1 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.958 12:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.217 12:59:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.217 nvme0n1 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.217 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
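(The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment at host/auth.sh@58 above is what makes bidirectional authentication optional per key: when ckeys[keyid] is empty, as for keyid 4 in this run where the trace shows [[ -z '' ]], the :+ expansion produces nothing and no --dhchap-ctrlr-key argument reaches bdev_nvme_attach_controller. A minimal stand-alone illustration with hypothetical key material:)

  ckeys=("dummy-ctrlr-secret" "")   # hypothetical: index 0 has a controller key, index 1 does not
  for keyid in 0 1; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[@]:-<none>}"
  done
  # keyid=0 extra args: --dhchap-ctrlr-key ckey0
  # keyid=1 extra args: <none>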
00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.477 nvme0n1 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.477 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.735 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.736 nvme0n1 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.736 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.994 12:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.995 12:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:03.995 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.995 12:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.995 nvme0n1 00:17:03.995 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.995 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.995 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.995 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.995 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
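The passage above is one pass of the sha512/ffdhe6144 leg of the test matrix: nvmet_auth_set_key programs the target side with key 0 and its controller key, and connect_authenticate then reconfigures the SPDK initiator and attaches to 10.0.0.1:4420. Stripped of the xtrace noise, the host-side half reduces to the two RPCs quoted in the log. The sketch below assumes they are issued through SPDK's scripts/rpc.py rather than the test's rpc_cmd wrapper, and that key0/ckey0 are keyring entries registered earlier in this run; the method names, flags, address, and NQNs are taken verbatim from the log.

# Limit the initiator to the digest and DH group under test (values from the log).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Attach to the target at 10.0.0.1:4420; passing --dhchap-key/--dhchap-ctrlr-key
# makes the CONNECT perform a bidirectional DH-HMAC-CHAP handshake.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0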
00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.253 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.511 nvme0n1 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
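Before moving on to the next key id, the script verifies that the authenticated attach actually produced a controller and then tears it down so the following iteration starts clean. A minimal equivalent of that check, again assuming direct use of scripts/rpc.py (the exit handling here is illustrative, not the test's own error path):

# Succeeds only if the authenticated controller came up under the expected name.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || exit 1

# Detach so the next digest/dhgroup/keyid combination starts from a clean state.
scripts/rpc.py bdev_nvme_detach_controller nvme0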
00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.511 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.512 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.512 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.512 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.512 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.512 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.512 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.512 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.512 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.078 nvme0n1 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.078 12:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.336 nvme0n1 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.336 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.337 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.337 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.337 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.337 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.337 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.337 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:05.337 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.337 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.902 nvme0n1 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.902 12:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.162 nvme0n1 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.162 12:59:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmI0MjNlNTMyMTk1NmU5ZTE4N2U0ZWVhNTdmZWJmMDA7N88X: 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQyYTU5ZTY4MzUzZTRiYzEwZWVhNjkyMzkxOWY0NTg5N2MxZTk5NzkwYzExNTFhZjNlNzlhNTdjODU2YjE3NcEJIa8=: 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.162 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.163 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.163 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.738 nvme0n1 00:17:06.738 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.738 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.738 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.738 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.738 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.738 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.996 12:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.562 nvme0n1 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.562 12:59:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU4NzhmNjZjNjQxMGJhY2E0ZDYyM2Q5OWUxMGVjYjBkW1At: 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: ]] 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDE4Y2I0YzBmNDEwMjBhNDVmZDM2NWE0NDFlMGU0YjI+mRVN: 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.562 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.563 12:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.129 nvme0n1 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThiMzU0YjE4ODMwM2RhMTQ4NTAyMGVkMjA3ZjRhM2IwYjIyZTJkNjM4NmVhZmI2tHpGrw==: 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJkM2ZkZDMxNjIyNGQ1NWU5MzZkODcyMTNkMmM5NTCEdd2r: 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:08.129 12:59:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.129 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.064 nvme0n1 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:09.064 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhNjliZDI5MzAwZDU5ZThmMTU5ZTc0OTU5YTU3YzRlYjIzNjA4NDNkMmVhNmFlODQ0MDgyY2Q3OTFkNzYxY2mxezE=: 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:09.065 12:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.632 nvme0n1 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY5NGM2YzRlNGYzNDZiNWIxMWVmOGEwN2U4MmEyZWZhZTJiZWU2Yjc4ZmZmMzZkrIbWIQ==: 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTVmOGM5NmY1NTAwYzEwN2I5ODZmZTk0NjQ1N2QwOTlhMTJmNGEzMmQwMzQ2M2IyBok4OA==: 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.632 
12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.632 request: 00:17:09.632 { 00:17:09.632 "name": "nvme0", 00:17:09.632 "trtype": "tcp", 00:17:09.632 "traddr": "10.0.0.1", 00:17:09.632 "adrfam": "ipv4", 00:17:09.632 "trsvcid": "4420", 00:17:09.632 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.632 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:09.632 "prchk_reftag": false, 00:17:09.632 "prchk_guard": false, 00:17:09.632 "hdgst": false, 00:17:09.632 "ddgst": false, 00:17:09.632 "method": "bdev_nvme_attach_controller", 00:17:09.632 "req_id": 1 00:17:09.632 } 00:17:09.632 Got JSON-RPC error response 00:17:09.632 response: 00:17:09.632 { 00:17:09.632 "code": -5, 00:17:09.632 "message": "Input/output error" 00:17:09.632 } 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.632 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.632 request: 00:17:09.632 { 00:17:09.632 "name": "nvme0", 00:17:09.632 "trtype": "tcp", 00:17:09.632 "traddr": "10.0.0.1", 00:17:09.632 "adrfam": "ipv4", 00:17:09.632 "trsvcid": "4420", 00:17:09.632 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.632 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:09.632 "prchk_reftag": false, 00:17:09.632 "prchk_guard": false, 00:17:09.632 "hdgst": false, 00:17:09.632 "ddgst": false, 00:17:09.632 "dhchap_key": "key2", 00:17:09.632 "method": "bdev_nvme_attach_controller", 00:17:09.632 "req_id": 1 00:17:09.632 } 00:17:09.632 Got JSON-RPC error response 00:17:09.632 response: 00:17:09.632 { 00:17:09.632 "code": -5, 00:17:09.633 "message": "Input/output error" 00:17:09.633 } 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:09.633 12:59:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.633 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.891 request: 00:17:09.891 { 00:17:09.891 "name": "nvme0", 00:17:09.891 "trtype": "tcp", 00:17:09.891 "traddr": "10.0.0.1", 00:17:09.891 "adrfam": "ipv4", 
00:17:09.891 "trsvcid": "4420", 00:17:09.891 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.891 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:09.891 "prchk_reftag": false, 00:17:09.891 "prchk_guard": false, 00:17:09.891 "hdgst": false, 00:17:09.891 "ddgst": false, 00:17:09.891 "dhchap_key": "key1", 00:17:09.891 "dhchap_ctrlr_key": "ckey2", 00:17:09.891 "method": "bdev_nvme_attach_controller", 00:17:09.891 "req_id": 1 00:17:09.891 } 00:17:09.891 Got JSON-RPC error response 00:17:09.891 response: 00:17:09.891 { 00:17:09.891 "code": -5, 00:17:09.891 "message": "Input/output error" 00:17:09.891 } 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.891 rmmod nvme_tcp 00:17:09.891 rmmod nvme_fabrics 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78608 ']' 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78608 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78608 ']' 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78608 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78608 00:17:09.891 killing process with pid 78608 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78608' 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78608 00:17:09.891 12:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78608 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:10.149 
12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:10.149 12:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:10.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.029 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:11.029 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:11.029 12:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dsf /tmp/spdk.key-null.BVN /tmp/spdk.key-sha256.ZJc /tmp/spdk.key-sha384.6sh /tmp/spdk.key-sha512.KFU /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:11.029 12:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:11.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.289 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:11.289 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:11.550 00:17:11.550 real 0m35.163s 00:17:11.550 user 0m31.994s 00:17:11.550 sys 0m3.716s 00:17:11.550 12:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:11.550 ************************************ 00:17:11.550 12:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:17:11.550 END TEST nvmf_auth_host 00:17:11.550 ************************************ 00:17:11.550 12:59:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:11.550 12:59:27 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:11.550 12:59:27 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:11.550 12:59:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:11.550 12:59:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.550 12:59:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.550 ************************************ 00:17:11.550 START TEST nvmf_digest 00:17:11.550 ************************************ 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:11.550 * Looking for test storage... 00:17:11.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:11.550 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:11.551 Cannot find device "nvmf_tgt_br" 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.551 Cannot find device "nvmf_tgt_br2" 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:11.551 Cannot find device "nvmf_tgt_br" 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:11.551 12:59:27 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:11.551 Cannot find device "nvmf_tgt_br2" 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:11.551 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:11.810 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.811 12:59:27 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:11.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:11.811 00:17:11.811 --- 10.0.0.2 ping statistics --- 00:17:11.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.811 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:11.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:17:11.811 00:17:11.811 --- 10.0.0.3 ping statistics --- 00:17:11.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.811 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:11.811 00:17:11.811 --- 10.0.0.1 ping statistics --- 00:17:11.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.811 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.811 12:59:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:12.070 ************************************ 00:17:12.070 START TEST nvmf_digest_clean 00:17:12.070 ************************************ 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:12.070 12:59:27 
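Condensed, the topology that nvmf_veth_init built above (and that the three pings just verified) is a veth pair per endpoint joined by a bridge, with the target side moved into its own network namespace. A rough stand-alone equivalent, using the harness's interface names and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3), is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the test ns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two veth peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target, as in the log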
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80179 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80179 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80179 ']' 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.070 12:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.070 [2024-07-15 12:59:27.951074] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:12.070 [2024-07-15 12:59:27.951177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.070 [2024-07-15 12:59:28.093281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.329 [2024-07-15 12:59:28.206189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.329 [2024-07-15 12:59:28.206255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.329 [2024-07-15 12:59:28.206281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.329 [2024-07-15 12:59:28.206292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.329 [2024-07-15 12:59:28.206301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
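The target those pings reach is the user-space nvmf_tgt started just above inside the namespace with --wait-for-rpc, so it sits idle until an RPC releases it. Outside the harness the same startup could be scripted roughly as follows; the rpc_get_methods polling loop is an illustrative stand-in for the harness's waitforlisten helper, whose internals are not shown in this log:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Wait until the app's RPC socket answers, then let initialization proceed.
    until "$rpc" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    "$rpc" framework_start_init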
00:17:12.329 [2024-07-15 12:59:28.206337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.895 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.895 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:12.895 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.895 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.895 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.153 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.153 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:13.153 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:13.153 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:13.153 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.153 12:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.153 [2024-07-15 12:59:29.050454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:13.153 null0 00:17:13.153 [2024-07-15 12:59:29.101116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.153 [2024-07-15 12:59:29.125195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80211 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80211 /var/tmp/bperf.sock 00:17:13.153 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80211 ']' 00:17:13.154 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:13.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:13.154 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.154 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
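Each run_bperf iteration that follows drives bdevperf entirely over its private RPC socket: the perf process is started with -z --wait-for-rpc, then configured and kicked off via /var/tmp/bperf.sock. Stripped of the harness wrappers, the flow is roughly this (paths and arguments as in the log; --ddgst is what makes the initiator compute the NVMe/TCP crc32c data digest being measured):

    perf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # -m 2 pins bdevperf to core mask 0x2; -z keeps it waiting for RPC-driven tests.
    "$perf" -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!

    "$rpc" -s /var/tmp/bperf.sock framework_start_init
    # Attach the remote namespace (it shows up as bdev nvme0n1) with data digest enabled.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$bperf_py" -s /var/tmp/bperf.sock perform_tests   # runs the 2-second workload
    kill "$bperfpid"

One bdevperf process is started per (workload, block size, queue depth) combination, which is why the Waiting/killprocess pattern repeats for each run_bperf call below.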
00:17:13.154 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.154 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.154 12:59:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:13.154 [2024-07-15 12:59:29.187651] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:13.154 [2024-07-15 12:59:29.187744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80211 ] 00:17:13.412 [2024-07-15 12:59:29.326884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.412 [2024-07-15 12:59:29.441501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.376 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.376 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:14.376 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:14.376 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:14.376 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:14.634 [2024-07-15 12:59:30.493877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:14.634 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.634 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.893 nvme0n1 00:17:14.893 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:14.893 12:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:15.152 Running I/O for 2 seconds... 
00:17:17.055 00:17:17.055 Latency(us) 00:17:17.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.055 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:17.055 nvme0n1 : 2.01 16008.90 62.53 0.00 0.00 7989.84 7179.17 22758.87 00:17:17.055 =================================================================================================================== 00:17:17.055 Total : 16008.90 62.53 0.00 0.00 7989.84 7179.17 22758.87 00:17:17.055 0 00:17:17.055 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:17.055 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:17.055 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:17.055 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:17.055 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:17.055 | select(.opcode=="crc32c") 00:17:17.055 | "\(.module_name) \(.executed)"' 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80211 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80211 ']' 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80211 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80211 00:17:17.313 killing process with pid 80211 00:17:17.313 Received shutdown signal, test time was about 2.000000 seconds 00:17:17.313 00:17:17.313 Latency(us) 00:17:17.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.313 =================================================================================================================== 00:17:17.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80211' 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80211 00:17:17.313 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80211 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80267 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80267 /var/tmp/bperf.sock 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80267 ']' 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:17.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.592 12:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:17.592 [2024-07-15 12:59:33.576479] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:17.593 [2024-07-15 12:59:33.576883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:17:17.593 Zero copy mechanism will not be used. 
00:17:17.593 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80267 ] 00:17:17.851 [2024-07-15 12:59:33.709939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.851 [2024-07-15 12:59:33.824878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.788 12:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.788 12:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:18.788 12:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:18.788 12:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:18.788 12:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:18.788 [2024-07-15 12:59:34.818114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:19.048 12:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.048 12:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.307 nvme0n1 00:17:19.307 12:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:19.307 12:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:19.307 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:19.307 Zero copy mechanism will not be used. 00:17:19.307 Running I/O for 2 seconds... 
00:17:21.842 00:17:21.842 Latency(us) 00:17:21.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.842 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:21.842 nvme0n1 : 2.00 7940.51 992.56 0.00 0.00 2011.70 1727.77 2919.33 00:17:21.842 =================================================================================================================== 00:17:21.842 Total : 7940.51 992.56 0.00 0.00 2011.70 1727.77 2919.33 00:17:21.842 0 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:21.842 | select(.opcode=="crc32c") 00:17:21.842 | "\(.module_name) \(.executed)"' 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80267 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80267 ']' 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80267 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80267 00:17:21.842 killing process with pid 80267 00:17:21.842 Received shutdown signal, test time was about 2.000000 seconds 00:17:21.842 00:17:21.842 Latency(us) 00:17:21.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.842 =================================================================================================================== 00:17:21.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80267' 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80267 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80267 00:17:21.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
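The pass criterion for each of these runs is not the IOPS figure but the accel statistics read back just above: digest.sh pulls the crc32c entry out of accel_get_stats and checks that the expected module (software here, since scan_dsa=false) actually executed it. The check amounts to roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stats=$("$rpc" -s /var/tmp/bperf.sock accel_get_stats \
            | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    read -r acc_module acc_executed <<< "$stats"
    (( acc_executed > 0 ))            # crc32c was actually executed during the run
    [[ "$acc_module" == software ]]   # and by the expected accel module (scan_dsa=false)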
00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80327 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80327 /var/tmp/bperf.sock 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80327 ']' 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.842 12:59:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:21.842 [2024-07-15 12:59:37.830503] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:21.842 [2024-07-15 12:59:37.831225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80327 ] 00:17:22.101 [2024-07-15 12:59:37.969533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.101 [2024-07-15 12:59:38.067846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.055 12:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.055 12:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:23.055 12:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:23.055 12:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:23.055 12:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:23.055 [2024-07-15 12:59:39.038319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:23.055 12:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:23.055 12:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:23.622 nvme0n1 00:17:23.622 12:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:23.623 12:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:23.623 Running I/O for 2 seconds... 
00:17:25.523 00:17:25.523 Latency(us) 00:17:25.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.523 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.523 nvme0n1 : 2.01 17026.35 66.51 0.00 0.00 7511.48 4647.10 15966.95 00:17:25.523 =================================================================================================================== 00:17:25.523 Total : 17026.35 66.51 0.00 0.00 7511.48 4647.10 15966.95 00:17:25.523 0 00:17:25.523 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:25.523 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:25.523 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:25.523 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:25.523 | select(.opcode=="crc32c") 00:17:25.523 | "\(.module_name) \(.executed)"' 00:17:25.523 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80327 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80327 ']' 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80327 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80327 00:17:25.781 killing process with pid 80327 00:17:25.781 Received shutdown signal, test time was about 2.000000 seconds 00:17:25.781 00:17:25.781 Latency(us) 00:17:25.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.781 =================================================================================================================== 00:17:25.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80327' 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80327 00:17:25.781 12:59:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80327 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80382 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80382 /var/tmp/bperf.sock 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80382 ']' 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:26.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.039 12:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:26.297 [2024-07-15 12:59:42.100556] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:26.297 [2024-07-15 12:59:42.101497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:17:26.297 Zero copy mechanism will not be used. 
00:17:26.297 llocations --file-prefix=spdk_pid80382 ] 00:17:26.297 [2024-07-15 12:59:42.238668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.297 [2024-07-15 12:59:42.330208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.232 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.232 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:27.232 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:27.232 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:27.232 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:27.232 [2024-07-15 12:59:43.289127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:27.490 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.490 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.748 nvme0n1 00:17:27.748 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:27.748 12:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:27.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:27.748 Zero copy mechanism will not be used. 00:17:27.748 Running I/O for 2 seconds... 
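The xtrace above shows the clean-digest case starting a fresh bdevperf (pid 80382) against the same target, this time with a 128 KiB random-write workload at queue depth 16 and a 2-second run; hence the notice that an I/O size of 131072 is greater than the 65536-byte zero copy threshold. The RPC sequence the harness drives here can be replayed by hand roughly as below; the paths, addresses and jq filter are taken from the trace itself, while BPERF_SOCK and the sleep standing in for waitforlisten are only illustrative.

  #!/usr/bin/env bash
  # Minimal sketch of the nvmf_digest "clean" randwrite/131072/qd16 run traced above.
  set -e
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock   # illustrative variable; same socket path as in this run

  # Start bdevperf paused (-z --wait-for-rpc) with the workload from host/digest.sh@82.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  sleep 1   # the harness uses waitforlisten on the RPC socket instead of a fixed sleep

  # Finish subsystem init, then attach the NVMe-oF TCP controller with data
  # digest enabled (--ddgst); this is what produces the nvme0n1 bdev seen above.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the timed workload, then read back how many crc32c operations the
  # accel layer executed and which module performed them.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The accel_get_stats read-back at the end is what host/digest.sh@93-@96 use to confirm that crc32c digests were actually executed and by the expected module (software in this run, since scan_dsa=false).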
00:17:30.277 00:17:30.277 Latency(us) 00:17:30.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.277 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:30.277 nvme0n1 : 2.00 7096.68 887.08 0.00 0.00 2249.04 1690.53 5600.35 00:17:30.277 =================================================================================================================== 00:17:30.277 Total : 7096.68 887.08 0.00 0.00 2249.04 1690.53 5600.35 00:17:30.277 0 00:17:30.277 12:59:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:30.277 12:59:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:30.277 12:59:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:30.277 12:59:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:30.277 12:59:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:30.277 | select(.opcode=="crc32c") 00:17:30.277 | "\(.module_name) \(.executed)"' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80382 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80382 ']' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80382 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80382 00:17:30.277 killing process with pid 80382 00:17:30.277 Received shutdown signal, test time was about 2.000000 seconds 00:17:30.277 00:17:30.277 Latency(us) 00:17:30.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.277 =================================================================================================================== 00:17:30.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80382' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80382 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80382 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80179 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80179 ']' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80179 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80179 00:17:30.277 killing process with pid 80179 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80179' 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80179 00:17:30.277 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80179 00:17:30.535 00:17:30.535 real 0m18.612s 00:17:30.535 user 0m36.047s 00:17:30.535 sys 0m4.652s 00:17:30.535 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.535 ************************************ 00:17:30.535 END TEST nvmf_digest_clean 00:17:30.535 ************************************ 00:17:30.535 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.535 12:59:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:30.535 12:59:46 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:30.535 12:59:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:30.535 12:59:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.535 12:59:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:30.536 ************************************ 00:17:30.536 START TEST nvmf_digest_error 00:17:30.536 ************************************ 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80471 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80471 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80471 ']' 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.536 12:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.794 [2024-07-15 12:59:46.617091] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:30.794 [2024-07-15 12:59:46.617195] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.794 [2024-07-15 12:59:46.757241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.794 [2024-07-15 12:59:46.829411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.794 [2024-07-15 12:59:46.829501] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.794 [2024-07-15 12:59:46.829528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.794 [2024-07-15 12:59:46.829550] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.794 [2024-07-15 12:59:46.829557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.794 [2024-07-15 12:59:46.829580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 [2024-07-15 12:59:47.574056] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 12:59:47 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 [2024-07-15 12:59:47.634800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:31.768 null0 00:17:31.768 [2024-07-15 12:59:47.680743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.768 [2024-07-15 12:59:47.704825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80503 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80503 /var/tmp/bperf.sock 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80503 ']' 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.768 12:59:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 [2024-07-15 12:59:47.767677] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:31.768 [2024-07-15 12:59:47.767771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80503 ] 00:17:32.027 [2024-07-15 12:59:47.904827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.027 [2024-07-15 12:59:48.003558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.027 [2024-07-15 12:59:48.059251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:32.962 12:59:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.220 nvme0n1 00:17:33.220 12:59:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:33.220 12:59:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.220 12:59:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:33.220 12:59:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.220 12:59:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:33.220 12:59:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:33.478 Running I/O for 2 seconds... 
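The nvmf_digest_error case that begins here breaks the digest path on purpose: crc32c on the target (nvmfpid 80471) was assigned to the accel "error" module before its framework came up, and once the --ddgst controller is attached from bdevperf (bperfpid 80503) that module is told to corrupt crc32c operations via accel_error_inject_error -t corrupt -i 256. A condensed sketch of just those error-injection RPCs, assuming both processes are already up and configured exactly as traced above (socket paths as in this run):

  #!/usr/bin/env bash
  # Error-injection RPCs of run_digest_error, condensed from the xtrace above.
  SPDK=/home/vagrant/spdk_repo/spdk
  TGT_SOCK=/var/tmp/spdk.sock      # nvmf_tgt RPC socket (the rpc_cmd default)
  BPERF_SOCK=/var/tmp/bperf.sock   # bdevperf -w randread -o 4096 -q 128 RPC socket

  # Target side, issued before its framework is started (host/digest.sh@104):
  # every crc32c operation is routed to the error-injecting accel module.
  "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_assign_opc -o crc32c -m error

  # Initiator side: keep per-controller NVMe error counters and retry
  # indefinitely, so injected failures show up in stats instead of failing I/O.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection starts disabled; attach the controller with data digest enabled.
  "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Ask the error module to corrupt crc32c operations (same -o/-t/-i arguments
  # as in the trace above), then drive the timed workload.
  "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

Everything that follows in the log is the intended fallout: the initiator's nvme_tcp layer reports a data digest error for each affected READ and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the harness can then observe through the error counters enabled by --nvme-error-stat.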
00:17:33.478 [2024-07-15 12:59:49.366292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.478 [2024-07-15 12:59:49.366338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.478 [2024-07-15 12:59:49.366369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.382316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.382351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.382423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.397731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.397765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.397794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.413018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.413051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.413080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.428119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.428152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.428181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.443430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.443463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.443491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.458280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.458314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.458343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.473427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.473459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.473487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.488966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.488999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.489028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.504092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.504123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.504152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.519250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.519304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.519333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.479 [2024-07-15 12:59:49.534426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.479 [2024-07-15 12:59:49.534479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.479 [2024-07-15 12:59:49.534507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.549681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.549735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.549764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.564850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.564903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.564932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.579526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.579577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.579605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.594205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.594257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.594285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.608732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.608785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.608814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.623356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.623418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.623446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.637956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.638008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.638036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.652507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.652558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.652586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.667065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.667117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.667145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.681766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.681833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.681860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.696395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.696448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.696476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.710949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.711001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.711029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.725672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.725724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.725752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.740219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.740271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.740300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.754855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.754904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.754932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.738 [2024-07-15 12:59:49.769693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.738 [2024-07-15 12:59:49.769752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11781 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:33.738 [2024-07-15 12:59:49.769797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.739 [2024-07-15 12:59:49.787218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.739 [2024-07-15 12:59:49.787257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.739 [2024-07-15 12:59:49.787270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.804309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.804372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.804387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.820108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.820161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.820189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.836263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.836317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.836346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.853068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.853121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.853149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.869285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.869323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.869352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.885006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.885060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:21341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.885088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.900519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.900572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.900600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.916446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.916498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.916526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.931211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.931264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.931292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.946464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.946518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.946548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.961345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.961404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.961433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.976158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.976211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.976241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:49.991282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:49.991335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:49.991363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:50.008270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:50.008310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:50.008324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:50.026097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:50.026152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:50.026181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.998 [2024-07-15 12:59:50.043638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:33.998 [2024-07-15 12:59:50.043693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.998 [2024-07-15 12:59:50.043722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.257 [2024-07-15 12:59:50.060787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.257 [2024-07-15 12:59:50.060843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.257 [2024-07-15 12:59:50.060857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.257 [2024-07-15 12:59:50.077123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.257 [2024-07-15 12:59:50.077176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.257 [2024-07-15 12:59:50.077189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.257 [2024-07-15 12:59:50.093714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.257 [2024-07-15 12:59:50.093767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.257 [2024-07-15 12:59:50.093795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.257 [2024-07-15 12:59:50.109155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 
00:17:34.257 [2024-07-15 12:59:50.109208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.257 [2024-07-15 12:59:50.109237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.257 [2024-07-15 12:59:50.126016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.257 [2024-07-15 12:59:50.126069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.257 [2024-07-15 12:59:50.126097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.257 [2024-07-15 12:59:50.141626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.257 [2024-07-15 12:59:50.141681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.257 [2024-07-15 12:59:50.141711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.257 [2024-07-15 12:59:50.158092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.257 [2024-07-15 12:59:50.158144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.257 [2024-07-15 12:59:50.158188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.257 [2024-07-15 12:59:50.173507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.173576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.173605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.258 [2024-07-15 12:59:50.191040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.191102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.191154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.258 [2024-07-15 12:59:50.209616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.209672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.209685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.258 [2024-07-15 12:59:50.227099] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.227167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.227181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.258 [2024-07-15 12:59:50.244497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.244577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.244618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.258 [2024-07-15 12:59:50.260435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.260500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.260529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.258 [2024-07-15 12:59:50.277503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.277539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.277553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.258 [2024-07-15 12:59:50.294975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.295013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.295042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.258 [2024-07-15 12:59:50.312923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.258 [2024-07-15 12:59:50.312986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.258 [2024-07-15 12:59:50.313015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.516 [2024-07-15 12:59:50.330891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.516 [2024-07-15 12:59:50.330947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.516 [2024-07-15 12:59:50.330977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:34.516 [2024-07-15 12:59:50.348893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.516 [2024-07-15 12:59:50.348936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.516 [2024-07-15 12:59:50.348950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.516 [2024-07-15 12:59:50.374590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.516 [2024-07-15 12:59:50.374644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.374672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.390317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.390400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.390413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.405519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.405587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.405616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.420638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.420703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.420732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.435693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.435746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.435773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.450766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.450819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.450847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.465809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.465860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.465888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.481220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.481273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.481301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.498681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.498734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.498762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.514743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.514822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.514850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.529826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.529879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.529908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.545102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.545154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 12:59:50.545182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.517 [2024-07-15 12:59:50.561165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.517 [2024-07-15 12:59:50.561219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.517 [2024-07-15 
12:59:50.561247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.578268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.578321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.776 [2024-07-15 12:59:50.578349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.595658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.595714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.776 [2024-07-15 12:59:50.595742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.611997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.612056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.776 [2024-07-15 12:59:50.612070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.629701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.629755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.776 [2024-07-15 12:59:50.629783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.646646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.646700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.776 [2024-07-15 12:59:50.646729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.662345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.662421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.776 [2024-07-15 12:59:50.662450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.677975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.678027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:745 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:34.776 [2024-07-15 12:59:50.678056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.693785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.693838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.776 [2024-07-15 12:59:50.693866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.776 [2024-07-15 12:59:50.709310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.776 [2024-07-15 12:59:50.709386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.777 [2024-07-15 12:59:50.709400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.777 [2024-07-15 12:59:50.724836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.777 [2024-07-15 12:59:50.724891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.777 [2024-07-15 12:59:50.724920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.777 [2024-07-15 12:59:50.740751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.777 [2024-07-15 12:59:50.740807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.777 [2024-07-15 12:59:50.740820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.777 [2024-07-15 12:59:50.756308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.777 [2024-07-15 12:59:50.756384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.777 [2024-07-15 12:59:50.756398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.777 [2024-07-15 12:59:50.773126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.777 [2024-07-15 12:59:50.773194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.777 [2024-07-15 12:59:50.773223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.777 [2024-07-15 12:59:50.790359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.777 [2024-07-15 12:59:50.790406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:5152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.777 [2024-07-15 12:59:50.790420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.777 [2024-07-15 12:59:50.808228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.777 [2024-07-15 12:59:50.808274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.777 [2024-07-15 12:59:50.808287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.777 [2024-07-15 12:59:50.825266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:34.777 [2024-07-15 12:59:50.825320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.777 [2024-07-15 12:59:50.825333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.035 [2024-07-15 12:59:50.841999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.842055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.842084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.857586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.857639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.857667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.872492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.872545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.872573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.887353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.887414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.887442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.902503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.902558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.902601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.920261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.920305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.920318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.937576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.937630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.937657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.953852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.953906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.953934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.969601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.969669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.969697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:50.987915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:50.987971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:50.987985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:51.006057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:51.006100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:51.006114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:51.023901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 
00:17:35.036 [2024-07-15 12:59:51.023955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:51.023969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:51.041978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:51.042035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:51.042049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:51.059584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:51.059624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:51.059638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.036 [2024-07-15 12:59:51.077652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.036 [2024-07-15 12:59:51.077706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.036 [2024-07-15 12:59:51.077736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.096321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.096398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.096413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.115008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.115051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.115065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.133702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.133757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.133772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.151839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.151892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.151921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.169071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.169123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.169151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.185596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.185632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.185660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.202137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.202205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.202234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.218013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.218049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.218077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.232914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.232982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.233009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.247942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.247978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.248005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.262954] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.262990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.263018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.277984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.278019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.278047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.294552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.294589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.294618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.311895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.311932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.311959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.328293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.328347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.328400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 [2024-07-15 12:59:51.344056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bb020) 00:17:35.296 [2024-07-15 12:59:51.344110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.296 [2024-07-15 12:59:51.344139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.296 00:17:35.296 Latency(us) 00:17:35.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.296 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:35.296 nvme0n1 : 2.01 15602.23 60.95 0.00 0.00 8197.02 7030.23 32172.22 00:17:35.296 =================================================================================================================== 00:17:35.296 Total : 15602.23 60.95 0.00 0.00 8197.02 7030.23 32172.22 00:17:35.296 0 
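The run above is the 4 KiB case: despite the stream of data digest errors on tqpair 0x20bb020, each of which comes back as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, bdevperf still finishes the 2-second randread workload (IO size 4096, queue depth 128) at roughly 15.6k IOPS with an average latency of about 8.2 ms. As an illustrative cross-check of the counter queried in the next step, the completions can be tallied straight from a saved copy of this console output; the file name below is only a placeholder.

  # Illustrative only: tally transient-transport-error completions in a captured console log.
  grep -Fo 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l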
00:17:35.555 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:35.555 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:35.555 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:35.555 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:35.555 | .driver_specific 00:17:35.555 | .nvme_error 00:17:35.555 | .status_code 00:17:35.555 | .command_transient_transport_error' 00:17:35.555 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:17:35.555 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80503 00:17:35.555 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80503 ']' 00:17:35.555 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80503 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80503 00:17:35.814 killing process with pid 80503 00:17:35.814 Received shutdown signal, test time was about 2.000000 seconds 00:17:35.814 00:17:35.814 Latency(us) 00:17:35.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.814 =================================================================================================================== 00:17:35.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80503' 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80503 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80503 00:17:35.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
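That trace is the pass/fail check for the 4 KiB run: get_transient_errcount asks the bdevperf RPC socket for the nvme0n1 I/O statistics and pulls the transient-transport-error counter out of the JSON with jq (123 in this run), the (( 123 > 0 )) test confirms that the digest corruption actually surfaced as errors, and the bdevperf process (pid 80503) is then shut down. A minimal standalone sketch of that counting step, reusing the rpc.py path, socket, and jq filter shown in the trace:

  # Sketch of the transient-error count check, assembled from the trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "transient transport errors recorded: $errcount"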
00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80562 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80562 /var/tmp/bperf.sock 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80562 ']' 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.814 12:59:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:36.073 [2024-07-15 12:59:51.917654] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:36.073 [2024-07-15 12:59:51.917792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80562 ] 00:17:36.073 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:36.073 Zero copy mechanism will not be used. 
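Here the suite moves on to the large-I/O error case (run_bperf_err randread 131072 16): a fresh bdevperf instance (pid 80562) is started on core mask 0x2 with its RPC server on /var/tmp/bperf.sock, configured for 128 KiB random reads at queue depth 16, and -z keeps it idle until perform_tests is issued over that socket. A hedged sketch of the launch-and-wait pattern, using the same binary path and flags as the trace; the polling loop is only a rough approximation of the harness's waitforlisten helper:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  # -z: start idle and wait to be driven over RPC; -t 2: run the workload for 2 seconds once started.
  "$BDEVPERF" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Rough stand-in for waitforlisten: poll until the RPC socket answers.
  for _ in $(seq 1 100); do
      "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done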
00:17:36.073 [2024-07-15 12:59:52.056353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.331 [2024-07-15 12:59:52.150635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.331 [2024-07-15 12:59:52.203246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:36.897 12:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.897 12:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:36.898 12:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:36.898 12:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:37.156 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:37.156 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.156 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.156 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.156 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.156 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.413 nvme0n1 00:17:37.413 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:37.413 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.413 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.413 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.413 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:37.413 12:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:37.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:37.672 Zero copy mechanism will not be used. 00:17:37.672 Running I/O for 2 seconds... 
00:17:37.672 [2024-07-15 12:59:53.494437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.494503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.494534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.498575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.498629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.498642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.502669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.502720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.502763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.506605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.506658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.506686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.510484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.510536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.510579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.514426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.514477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.514506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.518321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.518400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.518415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.522338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.522402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.522432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.526310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.526389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.526402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.530131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.530200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.530227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.534129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.534201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.534229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.672 [2024-07-15 12:59:53.538040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.672 [2024-07-15 12:59:53.538091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.672 [2024-07-15 12:59:53.538119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.541967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.542020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.542048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.545894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.545946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.545974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.549896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.549948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.549976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.553896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.553948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.553977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.557780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.557832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.557860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.561692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.561744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.561771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.565566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.565616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.565644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.569412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.569462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.569489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.573235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.573287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:37.673 [2024-07-15 12:59:53.573315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.577177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.577229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.577256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.581163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.581215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.581243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.585180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.585233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.585261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.589041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.589094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.589121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.593074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.593126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.593154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.597030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.597082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.597109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.601143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.601195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.601224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.605114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.605165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.605193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.609170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.609221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.609249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.613135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.613186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.613213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.617044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.617096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.617124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.621198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.621250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.621278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.625183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.625235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.625262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.629215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.629267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.629295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.633139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.633191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.633218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.637091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.637142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.637170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.641187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.641238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.641266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.645179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.645230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.645257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.649203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.649256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.649284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.653152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.653204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.653232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.657112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:37.673 [2024-07-15 12:59:53.657163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.657191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.661113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.661165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.661192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.665133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.665186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.665213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.669010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.669062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.669089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.673 [2024-07-15 12:59:53.672987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.673 [2024-07-15 12:59:53.673040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.673 [2024-07-15 12:59:53.673067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.677001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.677053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.677082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.680931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.681033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.681060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.684924] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.685008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.685035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.688865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.688903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.688931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.692818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.692872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.692900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.696812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.696866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.696895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.700917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.700985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.700997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.704914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.704998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.705027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.709008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.709060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.709088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.712861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.712914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.712943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.716771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.716826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.716838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.720774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.720827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.720855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.724743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.724797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.724827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.674 [2024-07-15 12:59:53.728917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.674 [2024-07-15 12:59:53.729004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.674 [2024-07-15 12:59:53.729031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.733016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.733067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.733096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.737281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.737335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.737363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.741446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.741499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.741527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.745293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.745346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.745383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.749168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.749221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.749248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.753025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.753076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.753103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.756923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.757022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.757049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.760903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.760975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.761002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.764803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.764857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.764885] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.768775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.768813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.768842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.772759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.772814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.772843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.776798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.776853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.776882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.780696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.780748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.780776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.784595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.784683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.784713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.788421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.788467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.788495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.792375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.792422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.792449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.796211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.796267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.796295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.800143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.800192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.800219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.804069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.804118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.804145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.808013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.808062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.808089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.811924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.811974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.812001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.815801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.934 [2024-07-15 12:59:53.815852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.934 [2024-07-15 12:59:53.815879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.934 [2024-07-15 12:59:53.819641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.819692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.819720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.823472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.823522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.823550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.827343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.827421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.827456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.831317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.831394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.831407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.835274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.835326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.835354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.839285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.839337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.839364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.843178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.843230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.843258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.847123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.847174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.847202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.851167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.851219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.851246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.855113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.855164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.855191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.859078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.859130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.859157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.863213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.863251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.863279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.867475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.867526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.867553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.871896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.871949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.871978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.876334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:37.935 [2024-07-15 12:59:53.876386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.876416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.880901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.880943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.880956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.885325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.885376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.885390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.889707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.889758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.889786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.893866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.893918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.893946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.898092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.898145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.898173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.902235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.902289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.902317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.906241] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.906293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.906321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.910167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.910220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.910248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.914066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.914118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.914146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.918033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.918086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.918114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.921972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.922024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.922052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.925961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.926014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.926042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.929834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.929887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.929914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:37.935 [2024-07-15 12:59:53.933694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.935 [2024-07-15 12:59:53.933746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.935 [2024-07-15 12:59:53.933773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.937497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.937563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.937591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.941258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.941310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.941338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.945187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.945239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.945267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.949268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.949323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.949351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.953193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.953245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.953273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.957194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.957247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.957275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.961208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.961261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.961289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.965211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.965264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.965292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.969190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.969243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.969271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.973222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.973276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.973304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.977119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.977183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.977211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.981057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.981109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.981136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.985050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.985103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.985131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.936 [2024-07-15 12:59:53.989007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:37.936 [2024-07-15 12:59:53.989059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.936 [2024-07-15 12:59:53.989086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.196 [2024-07-15 12:59:53.992933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.196 [2024-07-15 12:59:53.993041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.196 [2024-07-15 12:59:53.993069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.196 [2024-07-15 12:59:53.996874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.196 [2024-07-15 12:59:53.996929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:53.996953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.000811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.000862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.000890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.004737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.004790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.004819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.008568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.008639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.008668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.012418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.012466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.197 [2024-07-15 12:59:54.012493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.016389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.016437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.016464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.020221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.020270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.020298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.024117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.024169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.024197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.028032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.028084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.028112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.031922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.031974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.032002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.035855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.035907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.035935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.039699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.039750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.039779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.043639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.043691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.043718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.047528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.047580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.047608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.051520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.051570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.051597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.055440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.055493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.055521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.059350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.059413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.059441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.063289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.063341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.063368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.067233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.067285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.067312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.071195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.071247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.071274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.075108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.075160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.075187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.079065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.079116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.079145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.083020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.083071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.083099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.087119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.087172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.087199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.091053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.091105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.091132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.095036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:38.197 [2024-07-15 12:59:54.095089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.095116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.099055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.099107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.099135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.103106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.103158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.103186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.107147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.107199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.197 [2024-07-15 12:59:54.107227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.197 [2024-07-15 12:59:54.111122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.197 [2024-07-15 12:59:54.111174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.111202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.115195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.115248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.115275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.119267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.119319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.119347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.123180] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.123232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.123259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.127111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.127164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.127191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.131092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.131143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.131171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.135021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.135074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.135102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.139050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.139105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.139133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.143139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.143193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.143220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.147152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.147205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.147233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.151193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.151247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.151274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.155283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.155338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.155365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.159220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.159273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.159301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.163164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.163216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.163243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.167155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.167209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.167237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.171139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.171192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.171220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.175150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.175203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.175231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.179099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.179153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.179181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.183115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.183167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.183194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.187197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.187252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.187279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.191190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.191243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.191271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.195327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.195421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.195435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.199590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.199642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.199672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.203911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.203967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.203994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.207933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.207987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.208014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.212240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.212294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.212323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.216444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.216495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.216525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.220610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.220679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.220692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.224870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.224940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.198 [2024-07-15 12:59:54.224952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.198 [2024-07-15 12:59:54.228929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.198 [2024-07-15 12:59:54.228997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.199 [2024-07-15 12:59:54.229025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.199 [2024-07-15 12:59:54.232840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.199 [2024-07-15 12:59:54.232895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.199 [2024-07-15 12:59:54.232939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.199 [2024-07-15 12:59:54.236936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.199 [2024-07-15 12:59:54.237019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.199 [2024-07-15 12:59:54.237047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.199 [2024-07-15 12:59:54.241141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.199 [2024-07-15 12:59:54.241210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.199 [2024-07-15 12:59:54.241238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.199 [2024-07-15 12:59:54.245300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.199 [2024-07-15 12:59:54.245355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.199 [2024-07-15 12:59:54.245394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.199 [2024-07-15 12:59:54.249214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.199 [2024-07-15 12:59:54.249267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.199 [2024-07-15 12:59:54.249296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.199 [2024-07-15 12:59:54.253266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.199 [2024-07-15 12:59:54.253319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.199 [2024-07-15 12:59:54.253347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.257169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.257221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.257248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.261086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.261138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.261166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.265122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.265175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.265204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.269145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.269210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.269238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.273070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.273123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.273151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.276960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.277014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.277042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.281008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.281062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.281090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.285022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.285076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.285103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.289050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.289103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.289130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.292983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.293035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.293062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.296887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.296953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.296981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.301200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.301253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.301281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.305140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.305205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.305233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.309063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.309115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.309142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.313168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.313221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.313249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.316989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:38.460 [2024-07-15 12:59:54.317041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.317069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.321133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.321202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.321230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.325113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.325177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.325205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.329231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.329284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.329312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.333224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.333279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.333307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.337430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.337481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.337494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.341479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.341531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.341543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.345395] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.345457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.345486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.349511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.349580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.349610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.353958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.354011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.354039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.358359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.358455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.358469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.362760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.460 [2024-07-15 12:59:54.362796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.460 [2024-07-15 12:59:54.362825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.460 [2024-07-15 12:59:54.367366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.367446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.367459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.371668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.371706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.371719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.375975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.376041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.376068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.380116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.380167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.380194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.384257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.384308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.384336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.388418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.388482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.388512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.392676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.392713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.392726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.396877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.396964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.396977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.401083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.401136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.401165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.405405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.405470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.405514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.409824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.409878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.409908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.414186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.414227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.414240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.418649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.418702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.418730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.423017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.423070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.423099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.427338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.427403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.427433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.431695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.431747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.431775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.435865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.435919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.435947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.439900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.439952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.439980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.444105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.444173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.444203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.448271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.448326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.448354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.452441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.452493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.452521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.456563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.456612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.456665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.460522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.460573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.460601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.464516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.464566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.464594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.468677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.468715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.468728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.472572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.472644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.472674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.476502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.476551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.476579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.480437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.480488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.480516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.461 [2024-07-15 12:59:54.484428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.461 [2024-07-15 12:59:54.484477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.461 [2024-07-15 12:59:54.484505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.462 [2024-07-15 12:59:54.488267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.462 [2024-07-15 12:59:54.488316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.462 [2024-07-15 12:59:54.488344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.462 [2024-07-15 12:59:54.492161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.462 [2024-07-15 12:59:54.492211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.462 [2024-07-15 12:59:54.492240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.462 [2024-07-15 12:59:54.496325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.462 [2024-07-15 12:59:54.496417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.462 [2024-07-15 12:59:54.496431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.462 [2024-07-15 12:59:54.500324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.462 [2024-07-15 12:59:54.500398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.462 [2024-07-15 12:59:54.500411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.462 [2024-07-15 12:59:54.504279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.462 [2024-07-15 12:59:54.504328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.462 [2024-07-15 12:59:54.504356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.462 [2024-07-15 12:59:54.508516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.462 [2024-07-15 12:59:54.508569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.462 [2024-07-15 12:59:54.508582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.462 [2024-07-15 12:59:54.512557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.462 [2024-07-15 12:59:54.512607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.462 [2024-07-15 12:59:54.512659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.462 [2024-07-15 12:59:54.516429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.462 [2024-07-15 12:59:54.516478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.462 [2024-07-15 12:59:54.516506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.722 [2024-07-15 12:59:54.520653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.722 [2024-07-15 12:59:54.520689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.722 [2024-07-15 12:59:54.520719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.722 [2024-07-15 12:59:54.524646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.722 [2024-07-15 12:59:54.524686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.722 [2024-07-15 12:59:54.524714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.722 [2024-07-15 12:59:54.528461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.722 [2024-07-15 12:59:54.528510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.722 [2024-07-15 12:59:54.528537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.722 [2024-07-15 12:59:54.532660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.722 [2024-07-15 12:59:54.532696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.722 [2024-07-15 12:59:54.532710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.722 [2024-07-15 12:59:54.536589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.722 [2024-07-15 12:59:54.536682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.722 [2024-07-15 12:59:54.536711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.722 [2024-07-15 12:59:54.540665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.722 [2024-07-15 12:59:54.540715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.722 [2024-07-15 12:59:54.540744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.722 [2024-07-15 12:59:54.544907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:38.722 [2024-07-15 12:59:54.544947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.544975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.548833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.548889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.548902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.553229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.553267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.553295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.557401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.557450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.557478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.561629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.561678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.561705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.565876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.565929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.565956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.570077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.570129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.570157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.574199] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.574253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.574281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.578285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.578338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.578367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.582261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.582314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.582341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.586295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.586347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.586385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.590169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.590221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.590249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.594100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.594152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.594195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.598020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.598071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.598099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.602090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.602142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.602170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.606084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.606136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.606181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.610105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.610173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.610201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.614112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.614180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.614209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.618133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.618202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.618230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.622018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.622070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.622098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.625926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.625979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.626007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.629864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.629916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.629944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.633738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.633790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.633818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.637659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.637710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.637738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.641554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.641604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.641632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.645540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.645591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.645619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.649492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.649544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.649571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.653515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.653566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.653593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.657483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.657534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.723 [2024-07-15 12:59:54.657562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.723 [2024-07-15 12:59:54.661310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.723 [2024-07-15 12:59:54.661388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.661401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.665291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.665343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.665382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.669212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.669264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.669291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.673180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.673232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.673260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.677146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.677199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.677227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.681053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.681104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.724 [2024-07-15 12:59:54.681133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.684964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.685016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.685044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.688779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.688818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.688846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.692579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.692651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.692680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.696328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.696418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.696431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.700128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.700176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.700203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.704091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.704142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.704169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.707861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.707913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.707941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.711753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.711805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.711833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.715702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.715754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.715782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.719542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.719593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.719621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.723402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.723453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.723480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.727259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.727311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.727338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.731286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.731338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.731366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.735282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.735333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.735361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.739160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.739212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.739239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.743096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.743148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.743176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.747010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.747062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.747089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.750918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.750970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.750998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.754812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.754864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.754892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.758768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.758820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.758847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.762663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:38.724 [2024-07-15 12:59:54.762716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.762743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.766712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.766764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.766792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.770597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.770647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.724 [2024-07-15 12:59:54.770675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.724 [2024-07-15 12:59:54.774462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.724 [2024-07-15 12:59:54.774514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.725 [2024-07-15 12:59:54.774542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.725 [2024-07-15 12:59:54.778458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.725 [2024-07-15 12:59:54.778520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.725 [2024-07-15 12:59:54.778550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.985 [2024-07-15 12:59:54.782492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.985 [2024-07-15 12:59:54.782558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.985 [2024-07-15 12:59:54.782586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.985 [2024-07-15 12:59:54.786354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.985 [2024-07-15 12:59:54.786414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.985 [2024-07-15 12:59:54.786442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.985 [2024-07-15 12:59:54.790236] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.985 [2024-07-15 12:59:54.790289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.985 [2024-07-15 12:59:54.790318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.985 [2024-07-15 12:59:54.794217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.985 [2024-07-15 12:59:54.794269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.985 [2024-07-15 12:59:54.794297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.985 [2024-07-15 12:59:54.798170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.985 [2024-07-15 12:59:54.798223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.985 [2024-07-15 12:59:54.798251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.985 [2024-07-15 12:59:54.802144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.985 [2024-07-15 12:59:54.802213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.802241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.806097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.806149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.806194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.810005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.810059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.810086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.813912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.813964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.813992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.818172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.818226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.818254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.822355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.822420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.822433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.826440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.826493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.826521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.830572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.830624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.830653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.834743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.834797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.834825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.838883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.838937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.838966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.842942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.842995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.843024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.846936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.846988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.847015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.850866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.850918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.850945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.854797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.854860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.854888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.858649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.858699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.858726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.862361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.862422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.862450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.866264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.866317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.866344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.870345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.870406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.870435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.874340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.874401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.874429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.878282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.878338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.878365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.882125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.882178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.882207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.886502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.886571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.886613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.986 [2024-07-15 12:59:54.890820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.986 [2024-07-15 12:59:54.890858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.986 [2024-07-15 12:59:54.890886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.895023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.895075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.895103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.899588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.899639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.987 [2024-07-15 12:59:54.899667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.904049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.904087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.904115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.908143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.908215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.908228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.912425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.912474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.912488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.916732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.916772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.916785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.921037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.921088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.921115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.925230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.925284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.925312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.929241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.929291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.929319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.933136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.933201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.933229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.937109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.937176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.937205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.941111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.941178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.941206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.945172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.945224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.945252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.949219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.949269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.949297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.953254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.953306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.953334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.957189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.957241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.957268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.961097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.961148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.961175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.965020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.965071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.965099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.968960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.969028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.969055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.972997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.973049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.973077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.976801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.976855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.976883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.987 [2024-07-15 12:59:54.980723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.987 [2024-07-15 12:59:54.980777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.987 [2024-07-15 12:59:54.980806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:54.984609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:38.988 [2024-07-15 12:59:54.984683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:54.984712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:54.988472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:54.988522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:54.988549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:54.992407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:54.992464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:54.992492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:54.996291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:54.996340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:54.996367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.000194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.000245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.000273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.004117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.004167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.004194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.008026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.008076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.008103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.012003] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.012054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.012082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.015882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.015931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.015959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.019730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.019783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.019810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.023559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.023609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.023637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.027458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.027510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.027536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.031320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.031396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.031409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.035178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.035230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.035258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.039105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.039157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.039185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.988 [2024-07-15 12:59:55.043089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:38.988 [2024-07-15 12:59:55.043141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.988 [2024-07-15 12:59:55.043169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.047107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.047160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.047188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.051050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.051104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.051132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.055040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.055092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.055120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.058966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.059019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.059046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.062949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.063001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.063029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.066864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.066917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.066945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.070805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.070858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.070886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.074755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.074807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.074834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.078690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.078743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.078771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.082612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.082663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.082691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.086504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.086556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.086583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.090423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.090481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.090521] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.094533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.094585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.094613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.098445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.098498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.098525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.102333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.102395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.102424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.106317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.106392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.249 [2024-07-15 12:59:55.106404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.249 [2024-07-15 12:59:55.110230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.249 [2024-07-15 12:59:55.110282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.110311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.114077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.114127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.114155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.117983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.118035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.118062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.121806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.121857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.121884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.125648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.125699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.125726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.129524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.129590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.129617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.133344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.133408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.133436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.137177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.137228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.137256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.141057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.141108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.141135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.144889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.144957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.145000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.148737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.148774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.148802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.152720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.152772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.152801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.156520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.156567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.156595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.160255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.160302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.160329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.164019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.164068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.164095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.167863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.167923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.167950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.171749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.171799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.171826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.175584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.175634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.175661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.179383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.179433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.179461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.183258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.183310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.183337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.187170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.187221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.187250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.191077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.191129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.191157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.194979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.195029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.195057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.198799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:39.250 [2024-07-15 12:59:55.198850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.198877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.202649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.202715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.202744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.206516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.206568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.206596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.210321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.210397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.210410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.214146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.214198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.214226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.250 [2024-07-15 12:59:55.218098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.250 [2024-07-15 12:59:55.218149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.250 [2024-07-15 12:59:55.218177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.221950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.222001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.222029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.225785] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.225836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.225864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.229718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.229769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.229797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.233559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.233610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.233638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.237403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.237466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.237494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.241277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.241330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.241358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.245176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.245228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.245257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.249129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.249197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.249224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.252975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.253027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.253055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.256748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.256800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.256827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.260591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.260672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.260686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.265174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.265231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.265260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.269362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.269426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.269454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.273147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.273211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.273240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.277080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.277132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.277175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.280992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.281060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.281087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.284818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.284873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.284900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.288864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.288904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.288947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.292742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.292793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.292821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.296726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.296779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.296808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.300828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.300885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.300910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.251 [2024-07-15 12:59:55.304880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.251 [2024-07-15 12:59:55.304933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.251 [2024-07-15 12:59:55.304960] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.308740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.308794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.308822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.312936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.313006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.313019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.317544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.317610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.317637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.321547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.321598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.321625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.325568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.325620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.325647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.329397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.329461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.329489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.333199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.333250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.333277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.337465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.337531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.337574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.341784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.341821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.341848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.345950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.345987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.346014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.350002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.350040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.350066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.354116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.354173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.354201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.358324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.358389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.358403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.362359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.362406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.362434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.512 [2024-07-15 12:59:55.366388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.512 [2024-07-15 12:59:55.366451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.512 [2024-07-15 12:59:55.366464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.370648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.370692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.370707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.374819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.374859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.374873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.379078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.379134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.379163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.383521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.383561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.383574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.387756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.387793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.387822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.392263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.392302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.392315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.396687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.396725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.396738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.400975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.401038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.401066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.405362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.405414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.405428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.409884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.409924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.409937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.414529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.414579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.414593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.419057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.419116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.419161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.423592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 
00:17:39.513 [2024-07-15 12:59:55.423662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.423691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.427995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.428036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.428050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.432263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.432302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.432315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.436453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.436491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.436504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.440792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.440833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.440847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.445251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.445307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.445321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.449489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.449529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.449543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.453969] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.513 [2024-07-15 12:59:55.454012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.513 [2024-07-15 12:59:55.454026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.513 [2024-07-15 12:59:55.458124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.514 [2024-07-15 12:59:55.458165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.514 [2024-07-15 12:59:55.458179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.514 [2024-07-15 12:59:55.462309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.514 [2024-07-15 12:59:55.462348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.514 [2024-07-15 12:59:55.462378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.514 [2024-07-15 12:59:55.466760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.514 [2024-07-15 12:59:55.466797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.514 [2024-07-15 12:59:55.466841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.514 [2024-07-15 12:59:55.470893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.514 [2024-07-15 12:59:55.470946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.514 [2024-07-15 12:59:55.470974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.514 [2024-07-15 12:59:55.474905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.514 [2024-07-15 12:59:55.474958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.514 [2024-07-15 12:59:55.474986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.514 [2024-07-15 12:59:55.478821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0) 00:17:39.514 [2024-07-15 12:59:55.478876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.514 [2024-07-15 12:59:55.478904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0
00:17:39.514 [2024-07-15 12:59:55.482923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0)
00:17:39.514 [2024-07-15 12:59:55.482974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.514 [2024-07-15 12:59:55.483001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:39.514 [2024-07-15 12:59:55.487002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc0ac0)
00:17:39.514 [2024-07-15 12:59:55.487056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.514 [2024-07-15 12:59:55.487084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:39.514
00:17:39.514 Latency(us)
00:17:39.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.514 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:39.514 nvme0n1 : 2.00 7710.15 963.77 0.00 0.00 2072.01 1653.29 4766.25
00:17:39.514 ===================================================================================================================
00:17:39.514 Total : 7710.15 963.77 0.00 0.00 2072.01 1653.29 4766.25
00:17:39.514 0
00:17:39.514 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:39.514 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:39.514 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:39.514 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:39.514 | .driver_specific
00:17:39.514 | .nvme_error
00:17:39.514 | .status_code
00:17:39.514 | .command_transient_transport_error'
00:17:39.773 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 497 > 0 ))
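The (( 497 > 0 )) check above is the pass condition for this randread case: get_transient_errcount must report at least one transient transport error for nvme0n1, and this run accumulated 497 of them. The counter comes from the rpc.py call and jq filter shown in the trace; the sketch below only regroups those two commands into a standalone query, with the socket path and bdev name taken from this run (both would need adjusting in any other setup). The filter prints a single integer, which the test then compares against zero.

    # Read back the transient-transport-error count that bdevperf has accumulated
    # for one bdev. The per-status-code NVMe error counters are available because the
    # test enabled them earlier with bdev_nvme_set_options --nvme-error-stat.
    BPERF_SOCK=/var/tmp/bperf.sock
    BDEV=nvme0n1

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b "$BDEV" \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'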
00:17:39.773 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80562
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80562 ']'
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80562
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80562
00:17:39.774 killing process with pid 80562
Received shutdown signal, test time was about 2.000000 seconds
00:17:39.774
00:17:39.774 Latency(us)
00:17:39.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.774 ===================================================================================================================
00:17:39.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80562'
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80562
00:17:39.774 12:59:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80562
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80618
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80618 /var/tmp/bperf.sock
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80618 ']'
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:40.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:40.033 12:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:40.033 [2024-07-15 12:59:56.063951] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:17:40.033 [2024-07-15 12:59:56.064656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80618 ]
00:17:40.292 [2024-07-15 12:59:56.201285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:40.292 [2024-07-15 12:59:56.282284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:40.292 [2024-07-15 12:59:56.334311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:41.228 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:41.487 nvme0n1
00:17:41.746 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:17:41.746 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:41.746 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:41.746 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:41.746 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:41.746 12:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:41.746 Running I/O for 2 seconds...
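The trace above sets up the write-direction digest-error case: per-status-code NVMe error counters are enabled in bdevperf, a controller is attached over TCP with the data digest switched on (--ddgst), the accel crc32c error injector is armed to corrupt every 256th operation, and the queued randwrite job is started. Below is a minimal sketch of the same sequence using the paths and addresses from this run; which RPC socket each call targets follows the bperf_rpc and rpc_cmd helpers in host/digest.sh, and treating rpc_cmd as the application's default socket is an assumption on my part.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Keep per-status-code NVMe error counters in bdevperf; retry count -1 as in the trace.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe-oF TCP controller with the data digest (DDGST) enabled.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm crc32c error injection in the accel framework: corrupt every 256th operation.
    # (Issued through rpc_cmd in the trace, i.e. against the test's main SPDK application.)
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Start the queued bdevperf job (randwrite, 4096-byte I/O, queue depth 128, 2 seconds here).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests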
00:17:41.746 [2024-07-15 12:59:57.686042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fef90 00:17:41.746 [2024-07-15 12:59:57.688866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.746 [2024-07-15 12:59:57.688913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.746 [2024-07-15 12:59:57.703963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190feb58 00:17:41.746 [2024-07-15 12:59:57.706709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.746 [2024-07-15 12:59:57.706780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:41.746 [2024-07-15 12:59:57.721796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fe2e8 00:17:41.746 [2024-07-15 12:59:57.724391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.746 [2024-07-15 12:59:57.724440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:41.746 [2024-07-15 12:59:57.739229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fda78 00:17:41.746 [2024-07-15 12:59:57.741845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.746 [2024-07-15 12:59:57.741900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:41.746 [2024-07-15 12:59:57.756599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fd208 00:17:41.746 [2024-07-15 12:59:57.759323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.746 [2024-07-15 12:59:57.759368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:41.746 [2024-07-15 12:59:57.774198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fc998 00:17:41.746 [2024-07-15 12:59:57.777023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.746 [2024-07-15 12:59:57.777076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:41.746 [2024-07-15 12:59:57.791735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fc128 00:17:41.746 [2024-07-15 12:59:57.794360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.746 [2024-07-15 12:59:57.794439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.809094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fb8b8 00:17:42.005 [2024-07-15 12:59:57.811633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.811683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.825927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fb048 00:17:42.005 [2024-07-15 12:59:57.828453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.828497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.842843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190fa7d8 00:17:42.005 [2024-07-15 12:59:57.845186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.845228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.859709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f9f68 00:17:42.005 [2024-07-15 12:59:57.862154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.862195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.875989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f96f8 00:17:42.005 [2024-07-15 12:59:57.878305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.878346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.892122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f8e88 00:17:42.005 [2024-07-15 12:59:57.894396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.894435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.908217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f8618 00:17:42.005 [2024-07-15 12:59:57.910477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.910516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.924595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f7da8 00:17:42.005 [2024-07-15 12:59:57.926949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.926985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.941029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f7538 00:17:42.005 [2024-07-15 12:59:57.943349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.943394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.957029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f6cc8 00:17:42.005 [2024-07-15 12:59:57.959206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.959244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.972970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f6458 00:17:42.005 [2024-07-15 12:59:57.975196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.975233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:57.989319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f5be8 00:17:42.005 [2024-07-15 12:59:57.991540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:57.991577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:58.005651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f5378 00:17:42.005 [2024-07-15 12:59:58.007836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:58.007875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:58.022266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f4b08 00:17:42.005 [2024-07-15 12:59:58.024436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:58.024472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:58.038613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f4298 00:17:42.005 [2024-07-15 12:59:58.040766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:58.040805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:42.005 [2024-07-15 12:59:58.055669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f3a28 00:17:42.005 [2024-07-15 12:59:58.057980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.005 [2024-07-15 12:59:58.058035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.072807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f31b8 00:17:42.264 [2024-07-15 12:59:58.075068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.075102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.090145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f2948 00:17:42.264 [2024-07-15 12:59:58.092254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.092291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.107224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f20d8 00:17:42.264 [2024-07-15 12:59:58.109482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.109575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.124902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f1868 00:17:42.264 [2024-07-15 12:59:58.127112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.127167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.142368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f0ff8 00:17:42.264 [2024-07-15 12:59:58.144611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.144681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.159424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f0788 00:17:42.264 [2024-07-15 12:59:58.161597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.161651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.176468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190eff18 00:17:42.264 [2024-07-15 12:59:58.178547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.178616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.193496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ef6a8 00:17:42.264 [2024-07-15 12:59:58.195529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.195581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.210986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190eee38 00:17:42.264 [2024-07-15 12:59:58.213280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.213320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.228531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ee5c8 00:17:42.264 [2024-07-15 12:59:58.230466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.230503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.245688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190edd58 00:17:42.264 [2024-07-15 12:59:58.247657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.247710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.262808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ed4e8 00:17:42.264 [2024-07-15 12:59:58.264830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.264881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.280439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ecc78 00:17:42.264 [2024-07-15 12:59:58.282474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.282551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.298203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ec408 00:17:42.264 [2024-07-15 12:59:58.300283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.300349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:42.264 [2024-07-15 12:59:58.315852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ebb98 00:17:42.264 [2024-07-15 12:59:58.317810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.264 [2024-07-15 12:59:58.317852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.333079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190eb328 00:17:42.523 [2024-07-15 12:59:58.334989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.335042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.350335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190eaab8 00:17:42.523 [2024-07-15 12:59:58.352304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.352356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.367905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ea248 00:17:42.523 [2024-07-15 12:59:58.369983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.370038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.385748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e99d8 00:17:42.523 [2024-07-15 12:59:58.387610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 
12:59:58.387657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.403400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e9168 00:17:42.523 [2024-07-15 12:59:58.405294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.405349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.420925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e88f8 00:17:42.523 [2024-07-15 12:59:58.422687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.422754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.438131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e8088 00:17:42.523 [2024-07-15 12:59:58.439789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.439827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.454036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e7818 00:17:42.523 [2024-07-15 12:59:58.455677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.455712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.470942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e6fa8 00:17:42.523 [2024-07-15 12:59:58.472726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.472766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.488445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e6738 00:17:42.523 [2024-07-15 12:59:58.490198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.490250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.505959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e5ec8 00:17:42.523 [2024-07-15 12:59:58.507739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:42.523 [2024-07-15 12:59:58.507801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.524035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e5658 00:17:42.523 [2024-07-15 12:59:58.525653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.525707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.541692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e4de8 00:17:42.523 [2024-07-15 12:59:58.543395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.543494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:42.523 [2024-07-15 12:59:58.558903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e4578 00:17:42.523 [2024-07-15 12:59:58.560700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.523 [2024-07-15 12:59:58.560738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:42.524 [2024-07-15 12:59:58.576331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e3d08 00:17:42.524 [2024-07-15 12:59:58.577890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.524 [2024-07-15 12:59:58.577941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.593684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e3498 00:17:42.783 [2024-07-15 12:59:58.595284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.595322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.611794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e2c28 00:17:42.783 [2024-07-15 12:59:58.613398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.613474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.629371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e23b8 00:17:42.783 [2024-07-15 12:59:58.630930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11541 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.630981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.646544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e1b48 00:17:42.783 [2024-07-15 12:59:58.648028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.648095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.663998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e12d8 00:17:42.783 [2024-07-15 12:59:58.665520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.665620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.681637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e0a68 00:17:42.783 [2024-07-15 12:59:58.683151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.683220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.699260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e01f8 00:17:42.783 [2024-07-15 12:59:58.700754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.700795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.717071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190df988 00:17:42.783 [2024-07-15 12:59:58.718587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.718636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.734950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190df118 00:17:42.783 [2024-07-15 12:59:58.736294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.736333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.751790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190de8a8 00:17:42.783 [2024-07-15 12:59:58.753212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:24393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.753277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.769529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190de038 00:17:42.783 [2024-07-15 12:59:58.770957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.771024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.794235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190de038 00:17:42.783 [2024-07-15 12:59:58.797033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.797104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.812137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190de8a8 00:17:42.783 [2024-07-15 12:59:58.814852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.814925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:42.783 [2024-07-15 12:59:58.829826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190df118 00:17:42.783 [2024-07-15 12:59:58.832678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.783 [2024-07-15 12:59:58.832715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.846969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190df988 00:17:43.042 [2024-07-15 12:59:58.849650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.849720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.864210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e01f8 00:17:43.042 [2024-07-15 12:59:58.867006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.867075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.881805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e0a68 00:17:43.042 [2024-07-15 12:59:58.884414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.884474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.899120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e12d8 00:17:43.042 [2024-07-15 12:59:58.901819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.901905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.917136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e1b48 00:17:43.042 [2024-07-15 12:59:58.919785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.919851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.934760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e23b8 00:17:43.042 [2024-07-15 12:59:58.937268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.937340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.952416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e2c28 00:17:43.042 [2024-07-15 12:59:58.954987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.955055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.970065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e3498 00:17:43.042 [2024-07-15 12:59:58.972599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.972666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:58.986988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e3d08 00:17:43.042 [2024-07-15 12:59:58.989329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:58.989379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:59.004302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e4578 00:17:43.042 [2024-07-15 
12:59:59.006765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:59.006835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:59.021639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e4de8 00:17:43.042 [2024-07-15 12:59:59.024158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:59.024224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:59.039284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e5658 00:17:43.042 [2024-07-15 12:59:59.041639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:59.041706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:59.055873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e5ec8 00:17:43.042 [2024-07-15 12:59:59.058224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:59.058266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:59.072500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e6738 00:17:43.042 [2024-07-15 12:59:59.074786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:59.074826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:43.042 [2024-07-15 12:59:59.089280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e6fa8 00:17:43.042 [2024-07-15 12:59:59.091582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.042 [2024-07-15 12:59:59.091620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.106018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e7818 00:17:43.301 [2024-07-15 12:59:59.108403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.108480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.122945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e8088 
00:17:43.301 [2024-07-15 12:59:59.125204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.125291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.140157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e88f8 00:17:43.301 [2024-07-15 12:59:59.142373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.142428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.156710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e9168 00:17:43.301 [2024-07-15 12:59:59.158845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.158880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.173643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190e99d8 00:17:43.301 [2024-07-15 12:59:59.175828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.175865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.190203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ea248 00:17:43.301 [2024-07-15 12:59:59.192294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.192331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.206504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190eaab8 00:17:43.301 [2024-07-15 12:59:59.208683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.208720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.223817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190eb328 00:17:43.301 [2024-07-15 12:59:59.225984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.226074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.241192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with 
pdu=0x2000190ebb98 00:17:43.301 [2024-07-15 12:59:59.243230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.243266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.258222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ec408 00:17:43.301 [2024-07-15 12:59:59.260304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.260342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.275294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ecc78 00:17:43.301 [2024-07-15 12:59:59.277323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.277369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.292563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ed4e8 00:17:43.301 [2024-07-15 12:59:59.294803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.294854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.310361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190edd58 00:17:43.301 [2024-07-15 12:59:59.312400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.312460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.327634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190ee5c8 00:17:43.301 [2024-07-15 12:59:59.329822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.329859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.301 [2024-07-15 12:59:59.346817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190eee38 00:17:43.301 [2024-07-15 12:59:59.349392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.301 [2024-07-15 12:59:59.349490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:43.560 [2024-07-15 12:59:59.365226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23fa360) with pdu=0x2000190ef6a8 00:17:43.560 [2024-07-15 12:59:59.367346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.560 [2024-07-15 12:59:59.367408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:43.560 [2024-07-15 12:59:59.382960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190eff18 00:17:43.560 [2024-07-15 12:59:59.384957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.560 [2024-07-15 12:59:59.385025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:43.560 [2024-07-15 12:59:59.400608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f0788 00:17:43.560 [2024-07-15 12:59:59.402748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.560 [2024-07-15 12:59:59.402800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:43.560 [2024-07-15 12:59:59.418302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f0ff8 00:17:43.560 [2024-07-15 12:59:59.420252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.560 [2024-07-15 12:59:59.420303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:43.560 [2024-07-15 12:59:59.435768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f1868 00:17:43.560 [2024-07-15 12:59:59.437738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.560 [2024-07-15 12:59:59.437774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:43.560 [2024-07-15 12:59:59.453273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f20d8 00:17:43.560 [2024-07-15 12:59:59.455187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.560 [2024-07-15 12:59:59.455222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:43.560 [2024-07-15 12:59:59.470056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f2948 00:17:43.560 [2024-07-15 12:59:59.471890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.560 [2024-07-15 12:59:59.471929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:43.560 [2024-07-15 12:59:59.486993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x23fa360) with pdu=0x2000190f31b8 00:17:43.560 [2024-07-15 12:59:59.488948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.561 [2024-07-15 12:59:59.488993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:43.561 [2024-07-15 12:59:59.504593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f3a28 00:17:43.561 [2024-07-15 12:59:59.506451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.561 [2024-07-15 12:59:59.506537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:43.561 [2024-07-15 12:59:59.522210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f4298 00:17:43.561 [2024-07-15 12:59:59.524059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.561 [2024-07-15 12:59:59.524094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:43.561 [2024-07-15 12:59:59.539931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f4b08 00:17:43.561 [2024-07-15 12:59:59.541845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.561 [2024-07-15 12:59:59.541883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:43.561 [2024-07-15 12:59:59.557527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f5378 00:17:43.561 [2024-07-15 12:59:59.559363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.561 [2024-07-15 12:59:59.559426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:43.561 [2024-07-15 12:59:59.575118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f5be8 00:17:43.561 [2024-07-15 12:59:59.576822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.561 [2024-07-15 12:59:59.576870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:43.561 [2024-07-15 12:59:59.592574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f6458 00:17:43.561 [2024-07-15 12:59:59.594406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.561 [2024-07-15 12:59:59.594457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:43.561 [2024-07-15 12:59:59.610260] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f6cc8
00:17:43.561 [2024-07-15 12:59:59.612048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:43.561 [2024-07-15 12:59:59.612101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:17:43.820 [2024-07-15 12:59:59.627915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f7538
00:17:43.820 [2024-07-15 12:59:59.629659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:43.820 [2024-07-15 12:59:59.629710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:17:43.820 [2024-07-15 12:59:59.645303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f7da8
00:17:43.820 [2024-07-15 12:59:59.646963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:43.820 [2024-07-15 12:59:59.646998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:17:43.820 [2024-07-15 12:59:59.662921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa360) with pdu=0x2000190f8618
00:17:43.820 [2024-07-15 12:59:59.664625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:43.820 [2024-07-15 12:59:59.664691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:17:43.820
00:17:43.820 Latency(us)
00:17:43.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:43.820 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:43.820 nvme0n1 : 2.00 14654.76 57.25 0.00 0.00 8726.21 2472.49 33840.41
00:17:43.820 ===================================================================================================================
00:17:43.820 Total : 14654.76 57.25 0.00 0.00 8726.21 2472.49 33840.41
00:17:43.820 0
00:17:43.820 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:43.820 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:43.820 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:43.820 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:43.820 | .driver_specific
00:17:43.820 | .nvme_error
00:17:43.820 | .status_code
00:17:43.820 | .command_transient_transport_error'
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 ))
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80618
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80618 ']'
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80618
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80618
00:17:44.079 killing process with pid 80618
Received shutdown signal, test time was about 2.000000 seconds
00:17:44.079
00:17:44.079 Latency(us)
00:17:44.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:44.079 ===================================================================================================================
00:17:44.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80618'
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80618
00:17:44.079 12:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80618
00:17:44.338 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80675
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80675 /var/tmp/bperf.sock
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80675 ']'
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:44.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:44.339 13:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:44.339 [2024-07-15 13:00:00.294192] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
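
The pass/fail check traced above comes down to reading the transient-transport-error counter that --nvme-error-stat makes bdev_get_iostat report, and requiring it to be non-zero (here it was 115). A condensed sketch of that check, not the actual host/digest.sh implementation; socket path, rpc.py location, jq filter and bdev name are taken from the trace:

  # Sketch: read the per-controller error counter kept by bdevperf.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  # Every injected digest error is retried and counted, so the run only
  # passes if the counter ended up above zero.
  (( $(get_transient_errcount nvme0n1) > 0 ))
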
00:17:44.339 [2024-07-15 13:00:00.294891] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80675 ]
00:17:44.339 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:44.339 Zero copy mechanism will not be used.
00:17:44.597 [2024-07-15 13:00:00.436814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:44.597 [2024-07-15 13:00:00.560082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:44.597 [2024-07-15 13:00:00.620436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:17:45.533 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:45.533 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:17:45.533 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:45.533 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:45.533 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:45.533 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:45.533 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:45.792 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:45.792 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:45.792 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:46.051 nvme0n1
00:17:46.051 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:46.051 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:46.051 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:46.051 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:46.051 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:46.051 13:00:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:46.051 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:46.051 Zero copy mechanism will not be used.
00:17:46.051 Running I/O for 2 seconds...
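
Condensing the RPC sequence traced just above into one place gives a rough picture of how this second error run is set up. This is a sketch, not the host/digest.sh source; paths, addresses and flags are copied from the trace, and the un-socketed rpc.py calls stand in for rpc_cmd, which in the trace goes to the target app rather than the bperf socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # Keep per-controller NVMe error statistics and retry failed I/O indefinitely,
  # so injected digest errors show up as counters instead of failing the job.
  $rpc -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous crc32c error injection on the target side (rpc_cmd in the trace).
  $rpc accel_error_inject_error -o crc32c -t disable

  # Attach the NVMe-oF/TCP controller with data digest enabled (--ddgst).
  $rpc -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt crc32c results in the target's accel layer (flags as traced), so write
  # PDUs fail the data digest check in tcp.c and complete with a transient error...
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # ...then drive the 2-second randwrite workload bdevperf was started with.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests
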
00:17:46.051 [2024-07-15 13:00:02.033956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.051 [2024-07-15 13:00:02.034243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.051 [2024-07-15 13:00:02.034270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.051 [2024-07-15 13:00:02.038986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.051 [2024-07-15 13:00:02.039354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.051 [2024-07-15 13:00:02.039439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.051 [2024-07-15 13:00:02.044069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.051 [2024-07-15 13:00:02.044309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.051 [2024-07-15 13:00:02.044345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.051 [2024-07-15 13:00:02.049128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.051 [2024-07-15 13:00:02.049226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.051 [2024-07-15 13:00:02.049250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.051 [2024-07-15 13:00:02.054540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.051 [2024-07-15 13:00:02.054683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.051 [2024-07-15 13:00:02.054707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.051 [2024-07-15 13:00:02.059638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.051 [2024-07-15 13:00:02.059714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.051 [2024-07-15 13:00:02.059736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.051 [2024-07-15 13:00:02.064537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.051 [2024-07-15 13:00:02.064676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.051 [2024-07-15 13:00:02.064700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.051 [2024-07-15 13:00:02.069772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.051 [2024-07-15 13:00:02.069852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.051 [2024-07-15 13:00:02.069873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.051 [2024-07-15 13:00:02.074581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.052 [2024-07-15 13:00:02.074666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.052 [2024-07-15 13:00:02.074687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.052 [2024-07-15 13:00:02.079175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.052 [2024-07-15 13:00:02.079264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.052 [2024-07-15 13:00:02.079284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.052 [2024-07-15 13:00:02.083798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.052 [2024-07-15 13:00:02.083881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.052 [2024-07-15 13:00:02.083900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.052 [2024-07-15 13:00:02.088331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.052 [2024-07-15 13:00:02.088453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.052 [2024-07-15 13:00:02.088474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.052 [2024-07-15 13:00:02.092991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.052 [2024-07-15 13:00:02.093077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.052 [2024-07-15 13:00:02.093097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.052 [2024-07-15 13:00:02.097657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.052 [2024-07-15 13:00:02.097747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.052 [2024-07-15 13:00:02.097767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.052 [2024-07-15 13:00:02.102166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.052 [2024-07-15 13:00:02.102260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.052 [2024-07-15 13:00:02.102281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.052 [2024-07-15 13:00:02.106874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.052 [2024-07-15 13:00:02.106961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.052 [2024-07-15 13:00:02.106980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.111559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.111639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.111659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.116065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.116153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.116172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.120752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.120841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.120861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.125376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.125486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.125506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.130007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.130095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.130115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.134658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.134746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.134765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.139287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.139370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.139418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.143908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.143995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.144014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.148480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.148559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.148579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.153216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.153294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.153314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.157827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.157917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.157937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.162500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.162615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 
[2024-07-15 13:00:02.162634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.167090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.167177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.167196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.171786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.171866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.171885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.176282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.176365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.176428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.180900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.181014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.181034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.185481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.185573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.185592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.189959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.190047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.190066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.194602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.194691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.194710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.199128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.199214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.199234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.203711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.203794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.203814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.208210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.208287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.208307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.212930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.213049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.213069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.217523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.217608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.217627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.222097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.222191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.222211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.226724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.226807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.226826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.231285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.231376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.231429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.235895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.235977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.235997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.240481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.240566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.240586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.245099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.245188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.245207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.249717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.249806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.249826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.254296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.254378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.254414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.258884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.258961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.258981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.263468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.263545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.263565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.268027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.268104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.268125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.272593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.272718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.272739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.277172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.277263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.277282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.281839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.281929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.281948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.286429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.286518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.286537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.290947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 
[2024-07-15 13:00:02.291031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.291050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.295523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.295605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.295624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.300061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.300148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.300167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.304623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.304733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.304753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.309242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.309320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.309339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.313939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.314017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.314036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.318511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.318634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.318657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.323020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.323100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.323119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.327628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.327712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.327732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.332142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.332223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.317 [2024-07-15 13:00:02.332243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.317 [2024-07-15 13:00:02.336768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.317 [2024-07-15 13:00:02.336861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.336883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.318 [2024-07-15 13:00:02.341597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.318 [2024-07-15 13:00:02.341686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.341706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.318 [2024-07-15 13:00:02.346189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.318 [2024-07-15 13:00:02.346272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.346300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.318 [2024-07-15 13:00:02.350901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.318 [2024-07-15 13:00:02.350989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.351008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.318 [2024-07-15 13:00:02.355461] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.318 [2024-07-15 13:00:02.355547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.355568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.318 [2024-07-15 13:00:02.359993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.318 [2024-07-15 13:00:02.360083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.360103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.318 [2024-07-15 13:00:02.364691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.318 [2024-07-15 13:00:02.364763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.364783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.318 [2024-07-15 13:00:02.369311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.318 [2024-07-15 13:00:02.369432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.369479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.318 [2024-07-15 13:00:02.373862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.318 [2024-07-15 13:00:02.373942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.318 [2024-07-15 13:00:02.373963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.378412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.378490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.378511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.382929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.383009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.383029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
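
Throughout the two-second run, each affected write repeats the same three-record pattern: a tcp.c data digest error, the failed WRITE command, and a completion with status (00/22), i.e. status code type 0h, status code 22h, Transient Transport Error. For a quick eyeball count from a captured log, something like the following works (bdevperf.log is a hypothetical capture file; the authoritative count is the bdev_get_iostat counter checked after the run):

  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bdevperf.log
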
00:17:46.575 [2024-07-15 13:00:02.387487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.387564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.387584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.391973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.392050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.392070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.396550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.396629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.396677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.401129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.401204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.401223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.405816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.405892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.405912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.410432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.410520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.410540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.414925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.415008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.415028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.419580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.419671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.575 [2024-07-15 13:00:02.419691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.575 [2024-07-15 13:00:02.424099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.575 [2024-07-15 13:00:02.424177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.424196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.428715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.428796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.428816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.433394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.433481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.433501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.437921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.438009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.438029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.442473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.442550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.442570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.446993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.447067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.447087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.451641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.451719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.451739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.456059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.456136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.456157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.460603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.460691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.460712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.465200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.465281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.465301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.470330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.470422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.470455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.475840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.475920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.475944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.481285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.481351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.481387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.486433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.486569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.486591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.491761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.491836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.491859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.496769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.496841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.496863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.501881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.501963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.501988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.507012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.507101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.507133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.512015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.512085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.512110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.516746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.516833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 
[2024-07-15 13:00:02.516864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.521870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.521982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.522022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.527894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.528083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.528116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.533214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.533355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.533387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.538408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.538593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.538628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.543351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.543472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.543497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.548718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.548798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.548823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.553929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.554000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.554026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.559234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.559309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.559334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.564491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.564587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.564611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.569758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.569857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.569879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.575030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.575163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.575211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.580242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.580374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.580398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.585459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.585594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.585617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.590659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.590762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.590801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.595944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.596088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.596109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.601298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.601439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.601476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.606628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.606741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.606764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.611919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.612041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.612065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.617690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.617767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.617792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.622966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.623147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.623172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.628076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.628204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.628229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.576 [2024-07-15 13:00:02.633352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.576 [2024-07-15 13:00:02.633477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.576 [2024-07-15 13:00:02.633515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.638725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.638839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.638864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.643849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.643959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.643981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.648838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.649077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.649147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.653655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.653855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.653878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.658775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.659124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.659165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.663979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.664347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.664399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.668778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.668850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.668879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.673760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.673862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.673891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.678684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.678822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.678851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.683714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.683832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.683857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.688677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.688759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.688783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.693712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.693824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.693856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.698775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 
[2024-07-15 13:00:02.698965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.698996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.838 [2024-07-15 13:00:02.703769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.838 [2024-07-15 13:00:02.703879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.838 [2024-07-15 13:00:02.703903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.708889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.709261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.709331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.714663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.714787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.714811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.719487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.719605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.719628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.724446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.724543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.724565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.729364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.729553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.729589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.734477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with 
pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.734590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.734611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.739206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.739308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.739329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.744025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.744128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.744148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.749123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.749232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.749256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.754571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.754673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.754697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.759817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.759895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.759917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.765105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.765204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.765226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.770479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.770554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.770576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.775597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.775689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.775711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.780749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.780831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.780853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.785791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.785895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.785917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.790751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.790845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.790867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.795682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.795798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.795820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.800808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.800885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.800907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.806017] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.806117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.806145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.811306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.811401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.811426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.816382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.816489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.839 [2024-07-15 13:00:02.816512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.839 [2024-07-15 13:00:02.821628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.839 [2024-07-15 13:00:02.821761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.821782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.826753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.826862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.826883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.831818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.831912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.831949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.837107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.837225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.837249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:46.840 [2024-07-15 13:00:02.842223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.842323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.842347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.847492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.847563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.847587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.852670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.852756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.852779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.857701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.857770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.857794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.862478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.862597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.862618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.867294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.867436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.867457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.872064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.872159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.872180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.876787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.876886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.876908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.881781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.881878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.881901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.886528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.886626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.886647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.840 [2024-07-15 13:00:02.891292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:46.840 [2024-07-15 13:00:02.891426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.840 [2024-07-15 13:00:02.891447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.896130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.896243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.896265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.901333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.901473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.901495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.906062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.906161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.906183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.910882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.910980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.911002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.915634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.915731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.915752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.920745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.920851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.920873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.925505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.925624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.925645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.930274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.930369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.930391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.935105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.935204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.935225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.939990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.940104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.940126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.944805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.944889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.944911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.949453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.949550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.949570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.954182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.954298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.954319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.958941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.959035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.959058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.963835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.963942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.963963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.968696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.968781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.968803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.973442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.973535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 
[2024-07-15 13:00:02.973556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.978223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.978322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.978344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.983329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.983443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.983464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.988135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.988230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.988250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.993132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.993249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.993271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:02.998075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:02.998223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:02.998244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:03.002973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:03.003072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:03.003093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:03.008205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.100 [2024-07-15 13:00:03.008326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:47.100 [2024-07-15 13:00:03.008349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.100 [2024-07-15 13:00:03.014089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.014220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.014244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.019605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.019729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.019754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.025216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.025350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.025372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.031055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.031186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.031210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.036461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.036557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.036602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.041666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.041783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.041809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.046937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.047045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.047066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.051638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.051739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.051760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.056164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.056275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.056296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.061293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.061424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.061460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.066671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.066912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.067011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.071925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.072039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.072064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.077495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.077665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.077690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.082747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.082856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.082882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.087858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.087960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.087982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.092792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.092897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.092922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.097850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.097946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.097968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.102714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.102823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.102848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.107296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.107407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.107442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.112030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.112126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.112147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.116758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 
13:00:03.116842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.116864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.121423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.121531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.121552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.126017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.126121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.126142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.130728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.130822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.130842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.135322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.135424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.135444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.139918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.140019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.140041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.144632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.144743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.144765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.149278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with 
pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.149370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.149391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.153993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.154088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.154109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.101 [2024-07-15 13:00:03.158787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.101 [2024-07-15 13:00:03.158889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.101 [2024-07-15 13:00:03.158909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.163473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.360 [2024-07-15 13:00:03.163572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.360 [2024-07-15 13:00:03.163593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.168139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.360 [2024-07-15 13:00:03.168238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.360 [2024-07-15 13:00:03.168260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.172978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.360 [2024-07-15 13:00:03.173090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.360 [2024-07-15 13:00:03.173110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.177633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.360 [2024-07-15 13:00:03.177735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.360 [2024-07-15 13:00:03.177756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.182181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.360 [2024-07-15 13:00:03.182289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.360 [2024-07-15 13:00:03.182309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.186920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.360 [2024-07-15 13:00:03.187015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.360 [2024-07-15 13:00:03.187036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.191494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.360 [2024-07-15 13:00:03.191599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.360 [2024-07-15 13:00:03.191619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.196013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.360 [2024-07-15 13:00:03.196108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.360 [2024-07-15 13:00:03.196128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.360 [2024-07-15 13:00:03.200587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.200707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.200728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.205371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.205480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.205500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.209968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.210073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.210094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.214756] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.214852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.214874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.219299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.219438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.219459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.224026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.224120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.224140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.228541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.228650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.228672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.233081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.233193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.233212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.237781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.237881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.237902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.242260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.242361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.242382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.246805] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.246913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.246940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.251292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.251409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.251430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.255941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.256040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.256060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.260614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.260730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.260750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.265354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.265463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.265483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.270092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.270192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.270219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.275257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.275372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.275395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.361 
[2024-07-15 13:00:03.280097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.280198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.280221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.285152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.285296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.285318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.290716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.290953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.290980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.296275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.296356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.296379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.301852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.301960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.301980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.307215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.307337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.307361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.312378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.312505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.312555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.317491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.317657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.317678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.322497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.322654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.322673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.327232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.327373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.327394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.331923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.332030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.332050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.336713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.336806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.361 [2024-07-15 13:00:03.336828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.361 [2024-07-15 13:00:03.341472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.361 [2024-07-15 13:00:03.341581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.341601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.346091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.346211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.346240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.350843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.350958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.350977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.355427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.355581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.355600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.360155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.360297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.360316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.365061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.365180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.365200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.369773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.369869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.369889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.374389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.374553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.374574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.379081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.379203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.379223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.384071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.384231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.384253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.389094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.389207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.389228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.393754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.393851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.393871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.398479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.398592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.398613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.403137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.403291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.403311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.407946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.408085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.408104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.412397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.412514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.412536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.362 [2024-07-15 13:00:03.416915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.362 [2024-07-15 13:00:03.417035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.362 [2024-07-15 13:00:03.417055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.421354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.421469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.421488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.425834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.425940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.425960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.430233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.430369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.430400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.434753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.434859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.434878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.439142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.439281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.439301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.443683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.443760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 
13:00:03.443779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.448078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.448236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.448255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.452539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.452666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.452687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.457066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.457173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.457194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.461680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.461786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.461806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.466054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.466159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.466179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.470450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.470559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.470578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.474814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.474955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:47.621 [2024-07-15 13:00:03.474975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.479241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.479398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.479419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.483796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.483934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.483954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.488150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.488271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.488290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.492633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.492774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.492794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.497861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.497969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.497989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.503376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.503548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.503569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.508404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.508552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.508572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.513260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.513377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.513398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.518092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.518217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.518240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.522889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.522996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.523016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.527523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.527608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.527629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.532192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.532299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.532320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.536902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.537046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.537066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.541609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.541707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.541727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.546626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.546720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.621 [2024-07-15 13:00:03.546755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.621 [2024-07-15 13:00:03.551643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.621 [2024-07-15 13:00:03.551754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.551789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.556830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.556930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.556957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.562344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.562444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.562466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.567540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.567654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.567677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.572704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.572778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.572801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.577676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.577809] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.577831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.582770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.582895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.582916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.587545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.587645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.587665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.592126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.592246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.592268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.596833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.596918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.596940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.601470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.601577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.601597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.606230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.606373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.606395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.622 [2024-07-15 13:00:03.610955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:47.622 [2024-07-15 13:00:03.611064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.622 [2024-07-15 13:00:03.611084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[The trace continues in this same pattern from 13:00:03.615 through 13:00:04.029 (log timestamps 00:17:47.622 to 00:17:48.144): each repetition is a tcp.c:2067:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90", followed by an nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* for the affected WRITE (sqid:1 cid:1 nsid:1, len:32, varying lba) and an nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* reporting COMMAND TRANSIENT TRANSPORT ERROR (00/22) with cycling sqhd values. The last repetition in this run begins:]
00:17:48.144 [2024-07-15 13:00:04.029136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23fa500) with pdu=0x2000190fef90 00:17:48.144 [2024-07-15 13:00:04.029331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.144 [2024-07-15
13:00:04.029351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.144 00:17:48.144 Latency(us) 00:17:48.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.144 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:48.144 nvme0n1 : 2.00 6335.91 791.99 0.00 0.00 2519.31 1519.24 5898.24 00:17:48.144 =================================================================================================================== 00:17:48.144 Total : 6335.91 791.99 0.00 0.00 2519.31 1519.24 5898.24 00:17:48.144 0 00:17:48.144 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:48.144 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:48.144 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:48.144 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:48.144 | .driver_specific 00:17:48.144 | .nvme_error 00:17:48.144 | .status_code 00:17:48.144 | .command_transient_transport_error' 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 409 > 0 )) 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80675 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80675 ']' 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80675 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80675 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:48.403 killing process with pid 80675 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80675' 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80675 00:17:48.403 Received shutdown signal, test time was about 2.000000 seconds 00:17:48.403 00:17:48.403 Latency(us) 00:17:48.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.403 =================================================================================================================== 00:17:48.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.403 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80675 00:17:48.661 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80471 00:17:48.661 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80471 ']' 00:17:48.661 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80471 00:17:48.661 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@953 -- # uname 00:17:48.662 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.662 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80471 00:17:48.662 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:48.662 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:48.662 killing process with pid 80471 00:17:48.662 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80471' 00:17:48.662 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80471 00:17:48.662 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80471 00:17:48.921 00:17:48.921 real 0m18.230s 00:17:48.921 user 0m35.126s 00:17:48.921 sys 0m4.767s 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.921 ************************************ 00:17:48.921 END TEST nvmf_digest_error 00:17:48.921 ************************************ 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.921 rmmod nvme_tcp 00:17:48.921 rmmod nvme_fabrics 00:17:48.921 rmmod nvme_keyring 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80471 ']' 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80471 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80471 ']' 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80471 00:17:48.921 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80471) - No such process 00:17:48.921 Process with pid 80471 is not found 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80471 is not found' 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.921 13:00:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:49.181 00:17:49.181 real 0m37.563s 00:17:49.181 user 1m11.332s 00:17:49.181 sys 0m9.740s 00:17:49.181 13:00:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.181 13:00:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:49.181 ************************************ 00:17:49.181 END TEST nvmf_digest 00:17:49.181 ************************************ 00:17:49.181 13:00:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:49.181 13:00:05 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:17:49.181 13:00:05 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:17:49.181 13:00:05 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:49.181 13:00:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:49.181 13:00:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.181 13:00:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:49.181 ************************************ 00:17:49.181 START TEST nvmf_host_multipath 00:17:49.181 ************************************ 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:49.181 * Looking for test storage... 
00:17:49.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:49.181 13:00:05 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:49.181 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:49.182 Cannot find device "nvmf_tgt_br" 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:49.182 Cannot find device "nvmf_tgt_br2" 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:17:49.182 Cannot find device "nvmf_tgt_br" 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:49.182 Cannot find device "nvmf_tgt_br2" 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:49.182 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:49.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:49.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
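[For reference, the nvmf_veth_init trace above and just below amounts to the following topology setup. This is a condensed sketch reconstructed only from the commands visible in this log, not the literal body of nvmf/common.sh; the grouping and loop form are a simplification. All interface names, addresses, and the 4420 port come from the trace itself.]
# one veth pair for the initiator (stays in the root namespace), two for the target (moved into nvmf_tgt_ns_spdk)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addresses: 10.0.0.1 is the initiator, 10.0.0.2 and 10.0.0.3 are the target-side listener IPs
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring the links up on both sides of the namespace boundary
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
# bridge the three *_br peers together and open TCP/4420 toward the initiator (continued in the trace below)
ip link add nvmf_br type bridge
ip link set nvmf_br up
for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$peer" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
[The ping -c 1 checks against 10.0.0.2, 10.0.0.3, and 10.0.0.1 that follow verify this path before the nvmf target is started inside the namespace.]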
00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:49.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:49.441 00:17:49.441 --- 10.0.0.2 ping statistics --- 00:17:49.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.441 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:49.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:49.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:49.441 00:17:49.441 --- 10.0.0.3 ping statistics --- 00:17:49.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.441 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:49.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:49.441 00:17:49.441 --- 10.0.0.1 ping statistics --- 00:17:49.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.441 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80945 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80945 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 80945 ']' 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.441 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.442 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.442 13:00:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:49.700 [2024-07-15 13:00:05.526156] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:49.700 [2024-07-15 13:00:05.526295] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.700 [2024-07-15 13:00:05.665687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:49.958 [2024-07-15 13:00:05.775095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.958 [2024-07-15 13:00:05.775172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.958 [2024-07-15 13:00:05.775199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.958 [2024-07-15 13:00:05.775206] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.958 [2024-07-15 13:00:05.775213] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
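The target that has just started lives inside the nvmf_tgt_ns_spdk network namespace assembled by the nvmf_veth_init steps a few lines earlier. Because the topology is easy to lose in xtrace form, here is a condensed sketch built only from the ip/iptables commands visible in this log (the link-up steps are folded into a loop; nothing else is added): the initiator keeps 10.0.0.1 in the root namespace, the two target interfaces get 10.0.0.2 and 10.0.0.3 inside the namespace, and all three veth peers are enslaved to the nvmf_br bridge.

  # Namespace and the three veth pairs (one initiator side, two target sides).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addresses: initiator 10.0.0.1, target paths 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring everything up, inside and outside the namespace.
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Join the bridge-side veth ends to nvmf_br and open the NVMe/TCP port.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) verify exactly this wiring before the target is exercised.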
00:17:49.958 [2024-07-15 13:00:05.775378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.958 [2024-07-15 13:00:05.775388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.958 [2024-07-15 13:00:05.827141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:50.545 13:00:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.545 13:00:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:50.545 13:00:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.545 13:00:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.545 13:00:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:50.545 13:00:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.545 13:00:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80945 00:17:50.545 13:00:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:50.803 [2024-07-15 13:00:06.781775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.803 13:00:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:51.061 Malloc0 00:17:51.061 13:00:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:51.319 13:00:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.577 13:00:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.835 [2024-07-15 13:00:07.700168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.836 13:00:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:52.094 [2024-07-15 13:00:07.916255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81001 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81001 /var/tmp/bdevperf.sock 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81001 ']' 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.094 13:00:07 
nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.094 13:00:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.029 13:00:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.029 13:00:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:53.029 13:00:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:53.286 13:00:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:53.544 Nvme0n1 00:17:53.544 13:00:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:54.108 Nvme0n1 00:17:54.108 13:00:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:54.108 13:00:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:55.043 13:00:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:55.043 13:00:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:55.302 13:00:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:55.561 13:00:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:55.561 13:00:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81045 00:17:55.561 13:00:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80945 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:55.561 13:00:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.118 Attaching 4 probes... 
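The @path counters printed next come from the bpftrace helper that confirm_io_on_port launches: nvmf_path.bt attaches its probes to the nvmf_tgt process (pid 80945 here) and tallies I/O per target address and port while bdevperf keeps issuing requests, writing the result to trace.txt. The function then checks that the port actually carrying I/O is the listener whose ANA state was just set to optimized. A condensed, equivalent form of that check, using the same rpc.py/jq/awk/cut/sed commands shown in the trace (paths shortened; composing them into one pipeline is an assumption for readability, not necessarily how multipath.sh spells it):

  # Port of the listener that currently reports the expected ANA state.
  active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # Port that actually received I/O according to the bpftrace counters,
  # e.g. "@path[10.0.0.2, 4421]: 18712" -> "4421"; sed keeps the first match.
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
  # The step passes when both answers agree (4421 for this optimized/4421 cycle).
  [[ $port == "$active_port" ]]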
00:18:02.118 @path[10.0.0.2, 4421]: 18712 00:18:02.118 @path[10.0.0.2, 4421]: 17269 00:18:02.118 @path[10.0.0.2, 4421]: 17209 00:18:02.118 @path[10.0.0.2, 4421]: 17176 00:18:02.118 @path[10.0.0.2, 4421]: 17361 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81045 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:02.118 13:00:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:02.118 13:00:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:02.376 13:00:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:02.376 13:00:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81159 00:18:02.376 13:00:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80945 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:02.376 13:00:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:09.007 Attaching 4 probes... 
00:18:09.007 @path[10.0.0.2, 4420]: 16803 00:18:09.007 @path[10.0.0.2, 4420]: 17273 00:18:09.007 @path[10.0.0.2, 4420]: 17175 00:18:09.007 @path[10.0.0.2, 4420]: 17212 00:18:09.007 @path[10.0.0.2, 4420]: 18727 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81159 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:09.007 13:00:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:09.265 13:00:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:09.265 13:00:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81277 00:18:09.265 13:00:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80945 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:09.265 13:00:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.829 Attaching 4 probes... 
00:18:15.829 @path[10.0.0.2, 4421]: 13794 00:18:15.829 @path[10.0.0.2, 4421]: 16773 00:18:15.829 @path[10.0.0.2, 4421]: 16872 00:18:15.829 @path[10.0.0.2, 4421]: 16784 00:18:15.829 @path[10.0.0.2, 4421]: 16641 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81277 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:15.829 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:16.088 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:16.088 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80945 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:16.088 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81384 00:18:16.088 13:00:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:22.639 Attaching 4 probes... 
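Here both listeners were just set to inaccessible, so neither path may carry I/O: the probe output that follows has no @path counters at all, and the same comparison still passes, only with empty strings on both sides. The jq filter is asked for an ANA state of "" and matches no listener, so active_port stays empty, while awk over an empty trace.txt yields an empty port:

  jq -r '.[] | select(.ana_states[0].ana_state=="") | .address.trsvcid'   # no match -> active_port=''
  [[ '' == '' ]]                                                          # the @70/@71 checks below reduce to this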
00:18:22.639 00:18:22.639 00:18:22.639 00:18:22.639 00:18:22.639 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81384 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:22.639 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:22.898 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:22.898 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81502 00:18:22.898 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:22.898 13:00:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80945 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:29.460 13:00:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:29.460 13:00:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.460 Attaching 4 probes... 
00:18:29.460 @path[10.0.0.2, 4421]: 15526 00:18:29.460 @path[10.0.0.2, 4421]: 16526 00:18:29.460 @path[10.0.0.2, 4421]: 16779 00:18:29.460 @path[10.0.0.2, 4421]: 17163 00:18:29.460 @path[10.0.0.2, 4421]: 15800 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81502 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:29.460 13:00:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:30.393 13:00:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:30.393 13:00:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81620 00:18:30.393 13:00:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80945 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:30.393 13:00:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:36.977 Attaching 4 probes... 
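The listener on 4421 was just removed from the subsystem entirely (the @100 call above), so the multipath bdev has no choice but to fall back to 4420; the counters that follow confirm all I/O now lands there. Afterwards (@107/@108 below) the listener is re-added and marked optimized, and the final cycle checks that I/O returns to 4421. Condensed, with paths shortened, the RPC calls bracketing this phase are:

  # Tear down the 4421 path; in-flight I/O fails over to 4420.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Bring it back and advertise it as optimized; I/O is expected to move to 4421 again.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized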
00:18:36.977 @path[10.0.0.2, 4420]: 17663 00:18:36.977 @path[10.0.0.2, 4420]: 17107 00:18:36.977 @path[10.0.0.2, 4420]: 16764 00:18:36.977 @path[10.0.0.2, 4420]: 16860 00:18:36.977 @path[10.0.0.2, 4420]: 17000 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81620 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:36.977 [2024-07-15 13:00:52.973735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:36.977 13:00:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:37.543 13:00:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:44.103 13:00:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:44.104 13:00:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81800 00:18:44.104 13:00:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80945 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:44.104 13:00:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:49.393 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:49.393 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.652 Attaching 4 probes... 
00:18:49.652 @path[10.0.0.2, 4421]: 17640 00:18:49.652 @path[10.0.0.2, 4421]: 17484 00:18:49.652 @path[10.0.0.2, 4421]: 17352 00:18:49.652 @path[10.0.0.2, 4421]: 17374 00:18:49.652 @path[10.0.0.2, 4421]: 17314 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81800 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81001 00:18:49.652 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81001 ']' 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81001 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81001 00:18:49.653 killing process with pid 81001 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81001' 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81001 00:18:49.653 13:01:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81001 00:18:49.922 Connection closed with partial response: 00:18:49.922 00:18:49.922 00:18:49.922 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81001 00:18:49.922 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:49.922 [2024-07-15 13:00:07.993729] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:49.922 [2024-07-15 13:00:07.993973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81001 ] 00:18:49.922 [2024-07-15 13:00:08.131950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.922 [2024-07-15 13:00:08.244305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.922 [2024-07-15 13:00:08.299474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:49.922 Running I/O for 90 seconds... 
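What follows is the captured bdevperf log (try.txt) for the run, and it is dominated by per-command nvme_qpair notices: each listed WRITE/READ on qid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. a path-related status (SCT 0x3, SC 0x02) returned when the path the command was submitted on had been put into the inaccessible ANA state; the first entries are stamped 13:00:18, right around the @86 step above that flipped the 4421 listener to inaccessible. These completions are expected here: they are what makes the multipath bdev resubmit the I/O on the other listener, which is precisely the failover behaviour the @path counters above were verifying.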
00:18:49.922 [2024-07-15 13:00:18.332716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.332812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.332886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.332912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.332940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.332959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.332985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.333003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.333045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.333089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.333131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.333177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.333220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.922 [2024-07-15 13:00:18.333264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.922 [2024-07-15 13:00:18.333332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.922 [2024-07-15 13:00:18.333398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.922 [2024-07-15 13:00:18.333442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.922 [2024-07-15 13:00:18.333485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.922 [2024-07-15 13:00:18.333527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.922 [2024-07-15 13:00:18.333584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.922 [2024-07-15 13:00:18.333626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:49.922 [2024-07-15 13:00:18.333668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.922 [2024-07-15 13:00:18.333686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.333711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.333729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.333768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.333800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.333823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.333840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.333880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.333897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.333921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.333950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.333993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.334013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.334065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.334110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.334153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.334195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.334239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:49.923 [2024-07-15 13:00:18.334281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.334324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.923 [2024-07-15 13:00:18.334398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.334968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.334993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.335011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.335035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.923 [2024-07-15 13:00:18.335053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:49.923 [2024-07-15 13:00:18.335079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.924 [2024-07-15 13:00:18.335601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.924 [2024-07-15 13:00:18.335647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.924 [2024-07-15 13:00:18.335690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:18:49.924 [2024-07-15 13:00:18.335728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.924 [2024-07-15 13:00:18.335748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.924 [2024-07-15 13:00:18.335807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.924 [2024-07-15 13:00:18.335866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.924 [2024-07-15 13:00:18.335909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.924 [2024-07-15 13:00:18.335951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.335976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.335994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.924 [2024-07-15 13:00:18.336464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:49.924 [2024-07-15 13:00:18.336489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.336507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.336551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.336595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.336639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.336699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.925 [2024-07-15 13:00:18.336768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.925 [2024-07-15 13:00:18.336812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.925 [2024-07-15 13:00:18.336855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.925 [2024-07-15 13:00:18.336908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.925 [2024-07-15 13:00:18.336954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.336979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.925 [2024-07-15 13:00:18.336997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.925 [2024-07-15 13:00:18.337041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.925 [2024-07-15 13:00:18.337084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:49.925 [2024-07-15 13:00:18.337128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.925 [2024-07-15 13:00:18.337815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:49.925 [2024-07-15 13:00:18.337840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.926 [2024-07-15 13:00:18.337862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.337887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.926 [2024-07-15 13:00:18.337906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.337931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.926 [2024-07-15 13:00:18.337948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.337974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.926 [2024-07-15 13:00:18.337992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.338026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.926 [2024-07-15 13:00:18.338046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.338071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.926 [2024-07-15 13:00:18.338090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.338115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.926 [2024-07-15 13:00:18.338134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.339653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.926 [2024-07-15 13:00:18.339688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.339722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.339743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.339769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.339787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.339813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.339831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.339855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.339873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.339898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.339916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.339941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.339959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
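The bursts above are easier to read in aggregate than record by record. As a minimal sketch only, assuming this console output has been saved to a local file (the name build.log below is hypothetical and is not produced by this job), the pipeline tallies how many READ and WRITE submissions were logged in each second of the 13:00 bursts; per the completion prints, every one of them came back with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the path-related status an initiator sees while the namespace's ANA group is inaccessible during failover.
grep -Eo '\[2024-07-15 13:00:[0-9]+\.[0-9]+\] nvme_qpair\.c: 243:nvme_io_qpair_print_command: [*]NOTICE[*]: (READ|WRITE)' build.log \
  | awk '{ sub(/\..*\]$/, "", $2); count[$2 " " $6]++ }          # $2 = HH:MM:SS, $6 = READ/WRITE
         END { for (k in count) print count[k], k }' \
  | sort -k2                                                      # order the tallies by timestamp
Adjust the timestamp prefix in the grep pattern if tallying a different run; the rest of the pipeline only relies on the fixed field layout of the nvme_io_qpair_print_command lines shown here.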
00:18:49.926 [2024-07-15 13:00:18.339984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:18.340590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:18.340609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:24.857911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:24.857988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:24.858055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:24.858081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:24.858108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:24.858127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:24.858152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:24.858170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:24.858194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.926 [2024-07-15 13:00:24.858247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:49.926 [2024-07-15 13:00:24.858274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.858742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.858783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.858843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.858932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:49.927 [2024-07-15 13:00:24.858974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.858999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:105 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.927 [2024-07-15 13:00:24.859567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.859789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.927 [2024-07-15 13:00:24.859837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:49.927 [2024-07-15 13:00:24.859863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.928 [2024-07-15 13:00:24.859881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:49.928 [2024-07-15 13:00:24.859922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.928 [2024-07-15 13:00:24.859939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:49.928 [2024-07-15 13:00:24.859963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.928 [2024-07-15 13:00:24.859980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:49.928 [2024-07-15 13:00:24.860004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860087] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.929 [2024-07-15 13:00:24.860164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.929 [2024-07-15 13:00:24.860222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.929 [2024-07-15 13:00:24.860266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.929 [2024-07-15 13:00:24.860309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.929 [2024-07-15 13:00:24.860352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.929 [2024-07-15 13:00:24.860413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.929 [2024-07-15 13:00:24.860460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.929 [2024-07-15 13:00:24.860503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.929 [2024-07-15 13:00:24.860884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:49.929 [2024-07-15 13:00:24.860908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.860927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.860952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.860981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861540] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.861670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.930 [2024-07-15 13:00:24.861714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.930 [2024-07-15 13:00:24.861756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.930 [2024-07-15 13:00:24.861799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.930 [2024-07-15 13:00:24.861842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.930 [2024-07-15 13:00:24.861910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.930 [2024-07-15 13:00:24.861957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.861992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:49.930 [2024-07-15 13:00:24.862012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.862037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.930 [2024-07-15 13:00:24.862056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.862098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.862119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.862145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.862163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.862188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.862205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.862230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.862248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.862272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.930 [2024-07-15 13:00:24.862290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:49.930 [2024-07-15 13:00:24.862315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862464] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.862798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.931 [2024-07-15 13:00:24.862848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.931 [2024-07-15 13:00:24.862899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 
13:00:24.862924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.931 [2024-07-15 13:00:24.862942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.862967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.931 [2024-07-15 13:00:24.862984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.863009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.931 [2024-07-15 13:00:24.863027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.863052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.931 [2024-07-15 13:00:24.863070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.863104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.931 [2024-07-15 13:00:24.863124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.863863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.931 [2024-07-15 13:00:24.863893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.863933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.863954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.863987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:49.931 [2024-07-15 13:00:24.864529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.931 [2024-07-15 13:00:24.864560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.864597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.864616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.864648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.864667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.864714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.864736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.864769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.864788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.864825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.864846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.864879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.864897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.864930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.864949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.864982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.865001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:24.865033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:24.865052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.980748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:31.980817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.980884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:31.980909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.980936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:31.980980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:49.932 [2024-07-15 13:00:31.981028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:31.981070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:31.981113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:31.981155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.932 [2024-07-15 13:00:31.981197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:59 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:49.932 [2024-07-15 13:00:31.981798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.932 [2024-07-15 13:00:31.981816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.981841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.981858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.981883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.981917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.981942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.981959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 
13:00:31.981983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.982388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.982970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.982995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.983012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.983037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.983055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.983079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.983096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.983121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.933 [2024-07-15 13:00:31.983138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.983163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.983189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:49.933 [2024-07-15 13:00:31.983229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.933 [2024-07-15 13:00:31.983263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:49.934 [2024-07-15 13:00:31.983451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:52 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.983936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.983960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984385] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.934 [2024-07-15 13:00:31.984801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.934 [2024-07-15 13:00:31.984843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:49.934 [2024-07-15 13:00:31.984877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.984896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 
cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.984921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.984939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.984963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.984981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985391] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 
[2024-07-15 13:00:31.985864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.935 [2024-07-15 13:00:31.985959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.985983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.935 [2024-07-15 13:00:31.986000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:49.935 [2024-07-15 13:00:31.986023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.935 [2024-07-15 13:00:31.986040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.986063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.936 [2024-07-15 13:00:31.986080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.986103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.936 [2024-07-15 13:00:31.986119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.986158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.936 [2024-07-15 13:00:31.986174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.986197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.936 [2024-07-15 13:00:31.986213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.986252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.936 [2024-07-15 13:00:31.986269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.936 [2024-07-15 13:00:31.987083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.987947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.987982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.988015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.988033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:31.988065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:31.988083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:45.378077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:45.378169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:45.378215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:18:49.936 [2024-07-15 13:00:45.378241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:45.378259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:45.378301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:45.378345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:45.378410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.936 [2024-07-15 13:00:45.378452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.936 [2024-07-15 13:00:45.378495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:49.936 [2024-07-15 13:00:45.378553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.378961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.378986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.937 [2024-07-15 13:00:45.379276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.937 [2024-07-15 13:00:45.379313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.937 [2024-07-15 13:00:45.379354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.937 [2024-07-15 13:00:45.379409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.937 [2024-07-15 13:00:45.379443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.937 [2024-07-15 13:00:45.379477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.937 [2024-07-15 13:00:45.379512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.937 [2024-07-15 13:00:45.379546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379579] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.379860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.937 [2024-07-15 13:00:45.379885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.379904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.379929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.379946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.379963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.379981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.379997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:49.938 [2024-07-15 13:00:45.380364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.380466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380766] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.380983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.380999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.381018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.381035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.381059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.938 [2024-07-15 13:00:45.381085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.381106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.938 [2024-07-15 13:00:45.381122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.938 [2024-07-15 13:00:45.381140] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.381667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.939 [2024-07-15 13:00:45.381702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.939 [2024-07-15 13:00:45.381747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.939 [2024-07-15 13:00:45.381782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.939 [2024-07-15 13:00:45.381816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.939 [2024-07-15 13:00:45.381850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:49.939 [2024-07-15 13:00:45.381885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.939 [2024-07-15 13:00:45.381918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.939 [2024-07-15 13:00:45.381958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.381976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.382000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.382020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.382036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.382062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.382079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.382097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.382113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.382131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.382147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.382166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.382182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.382199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.939 [2024-07-15 13:00:45.382215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.939 [2024-07-15 13:00:45.382232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f6d0 is same with the state(5) to be set 00:18:49.939 [2024-07-15 13:00:45.382252] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.937 [2024-07-15 13:00:45.382270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.937 [2024-07-15 13:00:45.382283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23576 len:8 PRP1 0x0 PRP2 0x0 00:18:49.937 [2024-07-15 13:00:45.382299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.937 [2024-07-15 13:00:45.382316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.937 [2024-07-15 13:00:45.382333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.937 [2024-07-15 13:00:45.382345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23912 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 13:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.940 [2024-07-15 13:00:45.382490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23920 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23928 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382625] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23944 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23952 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23960 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23976 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.382944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.382958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23984 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.382973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.382989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.383002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.383014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23992 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.383030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.383046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.383063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.383077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.383092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.383109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.383121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.383133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24008 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.383148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.383164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.383176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.383189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24016 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.383204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.383220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.383232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.940 [2024-07-15 13:00:45.383244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24024 len:8 PRP1 0x0 PRP2 0x0 00:18:49.940 [2024-07-15 13:00:45.383259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.940 [2024-07-15 13:00:45.383275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.940 [2024-07-15 13:00:45.383287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.941 [2024-07-15 13:00:45.383300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:8 PRP1 0x0 PRP2 0x0 00:18:49.941 [2024-07-15 13:00:45.383315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 
13:00:45.383331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.941 [2024-07-15 13:00:45.383343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.941 [2024-07-15 13:00:45.383369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24040 len:8 PRP1 0x0 PRP2 0x0 00:18:49.941 [2024-07-15 13:00:45.383388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.383414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.941 [2024-07-15 13:00:45.383427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.941 [2024-07-15 13:00:45.383440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24048 len:8 PRP1 0x0 PRP2 0x0 00:18:49.941 [2024-07-15 13:00:45.383456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.383472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.941 [2024-07-15 13:00:45.383484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.941 [2024-07-15 13:00:45.383496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24056 len:8 PRP1 0x0 PRP2 0x0 00:18:49.941 [2024-07-15 13:00:45.383511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.383528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.941 [2024-07-15 13:00:45.383540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.941 [2024-07-15 13:00:45.383553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:8 PRP1 0x0 PRP2 0x0 00:18:49.941 [2024-07-15 13:00:45.383568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.383584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.941 [2024-07-15 13:00:45.383595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.941 [2024-07-15 13:00:45.383607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24072 len:8 PRP1 0x0 PRP2 0x0 00:18:49.941 [2024-07-15 13:00:45.383623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.383638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.941 [2024-07-15 13:00:45.383650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.941 [2024-07-15 13:00:45.383662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24080 len:8 PRP1 0x0 PRP2 0x0 00:18:49.941 [2024-07-15 13:00:45.383677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.383693] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.941 [2024-07-15 13:00:45.383705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.941 [2024-07-15 13:00:45.383717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24088 len:8 PRP1 0x0 PRP2 0x0 00:18:49.941 [2024-07-15 13:00:45.383732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.383791] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e9f6d0 was disconnected and freed. reset controller. 00:18:49.941 [2024-07-15 13:00:45.383933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:49.941 [2024-07-15 13:00:45.383962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.383981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:49.941 [2024-07-15 13:00:45.383997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.384014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:49.941 [2024-07-15 13:00:45.384042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.384060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:49.941 [2024-07-15 13:00:45.384076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.384093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.941 [2024-07-15 13:00:45.384110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.941 [2024-07-15 13:00:45.384132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19100 is same with the state(5) to be set 00:18:49.941 [2024-07-15 13:00:45.385292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:49.941 [2024-07-15 13:00:45.385336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e19100 (9): Bad file descriptor 00:18:49.941 [2024-07-15 13:00:45.385932] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.941 [2024-07-15 13:00:45.385979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e19100 with addr=10.0.0.2, port=4421 00:18:49.941 [2024-07-15 13:00:45.386000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19100 is same with the state(5) to be set 00:18:49.941 [2024-07-15 13:00:45.386042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e19100 (9): Bad file descriptor 
00:18:49.941 [2024-07-15 13:00:45.386093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:49.941 [2024-07-15 13:00:45.386116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:49.941 [2024-07-15 13:00:45.386133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:49.941 [2024-07-15 13:00:45.386172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:49.941 [2024-07-15 13:00:45.386193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:49.941 [2024-07-15 13:00:55.464175] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:49.941 Received shutdown signal, test time was about 55.638444 seconds
00:18:49.941
00:18:49.941                                                  Latency(us)
00:18:49.941 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:49.941 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:49.941 Verification LBA range: start 0x0 length 0x4000
00:18:49.941 Nvme0n1                     :      55.64    7373.46      28.80       0.00       0.00   17331.47     318.37 7046430.72
00:18:49.941 ===================================================================================================================
00:18:49.941 Total                       :              7373.46      28.80       0.00       0.00   17331.47     318.37 7046430.72
00:18:50.200 13:01:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:50.201 rmmod nvme_tcp
00:18:50.201 rmmod nvme_fabrics
00:18:50.201 rmmod nvme_keyring
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80945 ']'
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80945
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80945 ']'
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80945
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:50.201 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80945
00:18:50.460 13:01:06 nvmf_tcp.nvmf_host_multipath --
common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:50.460 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:50.460 killing process with pid 80945 00:18:50.460 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80945' 00:18:50.460 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80945 00:18:50.460 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80945 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:50.719 00:18:50.719 real 1m1.541s 00:18:50.719 user 2m50.284s 00:18:50.719 sys 0m19.080s 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:50.719 13:01:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:50.719 ************************************ 00:18:50.719 END TEST nvmf_host_multipath 00:18:50.719 ************************************ 00:18:50.719 13:01:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:50.719 13:01:06 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:50.719 13:01:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:50.719 13:01:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.719 13:01:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:50.719 ************************************ 00:18:50.719 START TEST nvmf_timeout 00:18:50.719 ************************************ 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:50.719 * Looking for test storage... 
00:18:50.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.719 
13:01:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.719 13:01:06 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:50.719 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:50.720 Cannot find device "nvmf_tgt_br" 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.720 Cannot find device "nvmf_tgt_br2" 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:50.720 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:50.981 Cannot find device "nvmf_tgt_br" 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:50.981 Cannot find device "nvmf_tgt_br2" 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.981 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:50.981 13:01:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:50.981 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:50.981 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:50.981 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:50.981 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:50.981 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:51.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:18:51.240 00:18:51.240 --- 10.0.0.2 ping statistics --- 00:18:51.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.240 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:51.240 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:51.240 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:51.240 00:18:51.240 --- 10.0.0.3 ping statistics --- 00:18:51.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.240 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:51.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:51.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:18:51.240 00:18:51.240 --- 10.0.0.1 ping statistics --- 00:18:51.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.240 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82106 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82106 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82106 ']' 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.240 13:01:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:51.240 [2024-07-15 13:01:07.128618] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:18:51.240 [2024-07-15 13:01:07.128686] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.240 [2024-07-15 13:01:07.264560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:51.498 [2024-07-15 13:01:07.371964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.498 [2024-07-15 13:01:07.372060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.498 [2024-07-15 13:01:07.372107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.498 [2024-07-15 13:01:07.372115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.498 [2024-07-15 13:01:07.372122] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.498 [2024-07-15 13:01:07.372286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.498 [2024-07-15 13:01:07.372275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.498 [2024-07-15 13:01:07.428213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:52.065 13:01:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.065 13:01:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:52.065 13:01:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:52.065 13:01:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.065 13:01:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:52.324 13:01:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.324 13:01:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:52.324 13:01:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:52.582 [2024-07-15 13:01:08.390785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.582 13:01:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:52.840 Malloc0 00:18:52.840 13:01:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:53.099 13:01:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.099 13:01:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.358 [2024-07-15 13:01:09.375900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82155 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82155 /var/tmp/bdevperf.sock 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82155 ']' 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.358 13:01:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.616 [2024-07-15 13:01:09.445840] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:53.616 [2024-07-15 13:01:09.445951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82155 ] 00:18:53.616 [2024-07-15 13:01:09.584595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.875 [2024-07-15 13:01:09.703763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.875 [2024-07-15 13:01:09.759433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:54.441 13:01:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.441 13:01:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:54.441 13:01:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:54.700 13:01:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:54.958 NVMe0n1 00:18:54.958 13:01:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82179 00:18:54.958 13:01:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.958 13:01:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:55.216 Running I/O for 10 seconds... 
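For readers skimming the trace, the setup that host/timeout.sh has performed up to this point condenses to the RPC sequence below. This is only a sketch: the commands, NQN, and the 10.0.0.2:4420 listener address are copied verbatim from the log above, the full /home/vagrant/spdk_repo paths are shortened to rpc.py / bdevperf / bdevperf.py, and the script's error handling and xtrace noise are omitted.

  # Target side (nvmf_tgt was started inside the nvmf_tgt_ns_spdk namespace;
  # rpc.py reaches it over the default /var/tmp/spdk.sock socket)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf waits on its own RPC socket (-z), a controller with short
  # reconnect/loss timeouts is attached, then the 10-second verify run is kicked off
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests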
00:18:56.148 13:01:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.408 [2024-07-15 13:01:12.253238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.408 [2024-07-15 13:01:12.253292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 
[2024-07-15 13:01:12.253518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.253986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.253995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254412] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66376 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.408 [2024-07-15 13:01:12.254852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.408 [2024-07-15 13:01:12.254861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.254872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.254881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.254892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.254901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.254913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.254922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.254934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.254944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.254955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.254968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.254979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.254989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 
[2024-07-15 13:01:12.255058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.255711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 
13:01:12.255930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.255983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.255995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.256004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.256015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.409 [2024-07-15 13:01:12.256025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.256037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.409 [2024-07-15 13:01:12.256050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.256061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b14d0 is same with the state(5) to be set 00:18:56.409 [2024-07-15 13:01:12.256073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:56.409 [2024-07-15 13:01:12.256081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:56.409 [2024-07-15 13:01:12.256090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:18:56.409 [2024-07-15 13:01:12.256105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.256165] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11b14d0 was disconnected and freed. reset controller. 
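The long run of NOTICE lines above is the host-side NVMe driver in bdevperf logging each outstanding READ/WRITE together with its "ABORTED - SQ DELETION" completion after host/timeout.sh@55 removed the 10.0.0.2:4420 listener; once the queue is drained the qpair is disconnected and freed and bdev_nvme schedules a controller reset. If a dump like this has been captured to a file (bdevperf.log below is just a placeholder name), a quick way to summarize it is:

  LOG=bdevperf.log   # placeholder: wherever the bdevperf output was saved
  grep -c 'ABORTED - SQ DELETION' "$LOG"             # total aborted completions
  grep -oE '(READ|WRITE) sqid:1 cid:[0-9]+' "$LOG" \
      | awk '{print $1}' | sort | uniq -c            # aborted I/Os split by opcode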
00:18:56.409 [2024-07-15 13:01:12.256260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.409 [2024-07-15 13:01:12.256276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.256290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.409 [2024-07-15 13:01:12.256299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.256309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.409 [2024-07-15 13:01:12.256318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.256329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.409 [2024-07-15 13:01:12.256338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.409 [2024-07-15 13:01:12.256347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1166d40 is same with the state(5) to be set 00:18:56.409 [2024-07-15 13:01:12.256575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.409 [2024-07-15 13:01:12.256598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1166d40 (9): Bad file descriptor 00:18:56.409 [2024-07-15 13:01:12.256691] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.409 [2024-07-15 13:01:12.256712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1166d40 with addr=10.0.0.2, port=4420 00:18:56.409 [2024-07-15 13:01:12.256723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1166d40 is same with the state(5) to be set 00:18:56.409 [2024-07-15 13:01:12.256741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1166d40 (9): Bad file descriptor 00:18:56.409 [2024-07-15 13:01:12.256764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:56.409 [2024-07-15 13:01:12.256785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:56.409 [2024-07-15 13:01:12.256806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:56.409 [2024-07-15 13:01:12.256828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
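From here the reconnect loop takes over: connect() to 10.0.0.2 port 4420 fails with errno 111 (ECONNREFUSED, nothing is listening there any more), controller initialization gives up, and bdev_nvme retries after the configured reconnect delay. While that loop spins, the harness confirms that the controller and its namespace bdev are still registered; a minimal sketch of that check, reusing the RPC socket and jq filters recorded just below (all values are taken from this run):

  # bdev_nvme should still report NVMe0/NVMe0n1 while reconnect attempts are failing.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expected: NVMe0
  "$rpc" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # expected: NVMe0n1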
00:18:56.409 [2024-07-15 13:01:12.256844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.409 13:01:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:58.317 [2024-07-15 13:01:14.257204] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:58.317 [2024-07-15 13:01:14.257280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1166d40 with addr=10.0.0.2, port=4420 00:18:58.317 [2024-07-15 13:01:14.257295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1166d40 is same with the state(5) to be set 00:18:58.317 [2024-07-15 13:01:14.257336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1166d40 (9): Bad file descriptor 00:18:58.317 [2024-07-15 13:01:14.257356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:58.317 [2024-07-15 13:01:14.257366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:58.317 [2024-07-15 13:01:14.257377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:58.317 [2024-07-15 13:01:14.257417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:58.317 [2024-07-15 13:01:14.257430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.317 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:58.317 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:58.317 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:58.574 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:58.574 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:58.574 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:58.574 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:58.832 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:58.832 13:01:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:00.207 [2024-07-15 13:01:16.257642] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.207 [2024-07-15 13:01:16.257707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1166d40 with addr=10.0.0.2, port=4420 00:19:00.207 [2024-07-15 13:01:16.257738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1166d40 is same with the state(5) to be set 00:19:00.207 [2024-07-15 13:01:16.257778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1166d40 (9): Bad file descriptor 00:19:00.207 [2024-07-15 13:01:16.257797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:00.207 [2024-07-15 13:01:16.257806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:00.207 [2024-07-15 13:01:16.257816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:19:00.208 [2024-07-15 13:01:16.257841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:00.208 [2024-07-15 13:01:16.257852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.738 [2024-07-15 13:01:18.257996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:02.738 [2024-07-15 13:01:18.258053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:02.738 [2024-07-15 13:01:18.258065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:02.738 [2024-07-15 13:01:18.258076] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:02.738 [2024-07-15 13:01:18.258120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:03.405 00:19:03.405 Latency(us) 00:19:03.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.405 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:03.405 Verification LBA range: start 0x0 length 0x4000 00:19:03.405 NVMe0n1 : 8.17 1005.69 3.93 15.68 0.00 125152.60 3872.58 7015926.69 00:19:03.405 =================================================================================================================== 00:19:03.405 Total : 1005.69 3.93 15.68 0.00 125152.60 3872.58 7015926.69 00:19:03.405 0 00:19:03.969 13:01:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:03.969 13:01:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:03.969 13:01:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:04.226 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:04.226 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:04.226 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:04.226 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:04.482 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:04.482 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82179 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82155 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82155 ']' 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82155 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82155 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:04.483 killing process with pid 82155 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82155' 00:19:04.483 Received shutdown signal, test time was about 9.266272 seconds 00:19:04.483 00:19:04.483 
Latency(us) 00:19:04.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.483 =================================================================================================================== 00:19:04.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82155 00:19:04.483 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82155 00:19:04.740 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.740 [2024-07-15 13:01:20.781620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82295 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82295 /var/tmp/bdevperf.sock 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82295 ']' 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.997 13:01:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:04.997 [2024-07-15 13:01:20.845020] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
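The second bdevperf instance launched above runs in its RPC-driven mode (-z): it sits idle on /var/tmp/bdevperf.sock until a controller has been attached and perform_tests is sent over that socket. A condensed sketch of that sequence, lifted from the commands recorded around this point in the log (binary path, NQN and timeout values are the ones this run uses; waitforlisten and error handling omitted):

  # Start bdevperf idle (-z) and drive it over its RPC socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests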
00:19:04.997 [2024-07-15 13:01:20.845091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82295 ] 00:19:04.997 [2024-07-15 13:01:20.980530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.256 [2024-07-15 13:01:21.109548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.256 [2024-07-15 13:01:21.167016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:05.823 13:01:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.823 13:01:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:05.823 13:01:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:06.082 13:01:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:06.340 NVMe0n1 00:19:06.340 13:01:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82323 00:19:06.340 13:01:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:06.340 13:01:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:06.599 Running I/O for 10 seconds... 00:19:07.535 13:01:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.813 [2024-07-15 13:01:23.599518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.813 [2024-07-15 13:01:23.599574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 
13:01:23.599674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.599981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.599992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.600002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.813 [2024-07-15 13:01:23.600013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.813 [2024-07-15 13:01:23.600022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600532] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600735] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.814 [2024-07-15 13:01:23.600743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.814 [2024-07-15 13:01:23.600757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62240 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.600983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.600992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 
[2024-07-15 13:01:23.601197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.815 [2024-07-15 13:01:23.601449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.815 [2024-07-15 13:01:23.601459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.816 [2024-07-15 13:01:23.601984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.601996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.602019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.602039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.602059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.602080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 
13:01:23.602100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.602121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.602141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.602162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.816 [2024-07-15 13:01:23.602182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.816 [2024-07-15 13:01:23.602191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.817 [2024-07-15 13:01:23.602211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.817 [2024-07-15 13:01:23.602231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.817 [2024-07-15 13:01:23.602255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.817 [2024-07-15 13:01:23.602276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.817 [2024-07-15 13:01:23.602296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.817 [2024-07-15 13:01:23.602321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24274d0 is same with the state(5) to be set 00:19:07.817 [2024-07-15 13:01:23.602344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:07.817 [2024-07-15 13:01:23.602352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:07.817 [2024-07-15 13:01:23.602371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62616 len:8 PRP1 0x0 PRP2 0x0 00:19:07.817 [2024-07-15 13:01:23.602382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602436] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24274d0 was disconnected and freed. reset controller. 00:19:07.817 [2024-07-15 13:01:23.602518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.817 [2024-07-15 13:01:23.602535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.817 [2024-07-15 13:01:23.602555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.817 [2024-07-15 13:01:23.602573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.817 [2024-07-15 13:01:23.602593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.817 [2024-07-15 13:01:23.602602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dcd40 is same with the state(5) to be set 00:19:07.817 [2024-07-15 13:01:23.602816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.817 [2024-07-15 13:01:23.602837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dcd40 (9): Bad file descriptor 00:19:07.817 [2024-07-15 13:01:23.602927] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.817 [2024-07-15 13:01:23.602948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dcd40 with addr=10.0.0.2, port=4420 00:19:07.817 [2024-07-15 13:01:23.602959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dcd40 is same with the state(5) to be set 00:19:07.817 [2024-07-15 
13:01:23.602977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dcd40 (9): Bad file descriptor
00:19:07.817 [2024-07-15 13:01:23.602994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:07.817 [2024-07-15 13:01:23.603004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:07.817 [2024-07-15 13:01:23.603014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:07.817 [2024-07-15 13:01:23.603034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:07.817 [2024-07-15 13:01:23.603050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:07.817 13:01:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:08.754 [2024-07-15 13:01:24.603284] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:08.754 [2024-07-15 13:01:24.603399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dcd40 with addr=10.0.0.2, port=4420
00:19:08.754 [2024-07-15 13:01:24.603419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dcd40 is same with the state(5) to be set
00:19:08.754 [2024-07-15 13:01:24.603450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dcd40 (9): Bad file descriptor
00:19:08.754 [2024-07-15 13:01:24.603471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:08.754 [2024-07-15 13:01:24.603483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:08.754 [2024-07-15 13:01:24.603495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:08.754 [2024-07-15 13:01:24.603524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:08.754 [2024-07-15 13:01:24.603536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:08.754 13:01:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:09.011 [2024-07-15 13:01:24.856254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:09.011 13:01:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82323
00:19:09.574 [2024-07-15 13:01:25.617914] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
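The xtrace steps just above (sleep 1 at host/timeout.sh@90, nvmf_subsystem_add_listener at @91, wait at @92) close out the first fault-injection pass: with the TCP listener gone, queued I/O is completed as ABORTED - SQ DELETION and reconnect attempts fail with errno 111 until the listener comes back, after which the controller reset succeeds. A minimal sketch of that drop/re-add cycle, assuming an SPDK nvmf target is already serving the subsystem and using only the rpc.py path, NQN, address and port shown in the trace (everything else here is illustrative):

  #!/usr/bin/env bash
  # Sketch of the listener drop/re-add cycle driven by host/timeout.sh above.
  set -euo pipefail
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Drop the listener while bdevperf I/O is in flight; queued commands are
  # completed as "ABORTED - SQ DELETION" and bdev_nvme starts reset retries.
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

  sleep 1   # let at least one reconnect attempt fail (connect() errno = 111)

  # Restore the listener; the next reconnect succeeds and the reset completes.
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

The second pass below repeats the same remove/re-add cycle via host/timeout.sh@99 and @102.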
00:19:17.701
00:19:17.701 Latency(us)
00:19:17.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:17.701 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:17.701 Verification LBA range: start 0x0 length 0x4000
00:19:17.701 NVMe0n1 : 10.01 6452.40 25.20 0.00 0.00 19797.74 1578.82 3019898.88
00:19:17.701 ===================================================================================================================
00:19:17.701 Total : 6452.40 25.20 0.00 0.00 19797.74 1578.82 3019898.88
00:19:17.701 0
00:19:17.701 13:01:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82429
00:19:17.701 13:01:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:17.701 13:01:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:17.701 Running I/O for 10 seconds... 13:01:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:17.962 [2024-07-15 13:01:33.774328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:17.962 [2024-07-15 13:01:33.774424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:17.962 [2024-07-15 13:01:33.774455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:17.962 [2024-07-15 13:01:33.774481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:17.962 [2024-07-15 13:01:33.774491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:17.962 [2024-07-15 13:01:33.774501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:17.962 [2024-07-15 13:01:33.774510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:17.962 [2024-07-15 13:01:33.774520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:17.962 [2024-07-15 13:01:33.774529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dcd40 is same with the state(5) to be set
00:19:17.962 [2024-07-15 13:01:33.774810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:17.962 [2024-07-15 13:01:33.774845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:17.962 [2024-07-15 13:01:33.774865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:17.962 [2024-07-15 13:01:33.774876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:17.962 [2024-07-15 13:01:33.774888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63784 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.774898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.774909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.774918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.774929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.774938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.774949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.774958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.774969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.774985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.774996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 
[2024-07-15 13:01:33.775108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.962 [2024-07-15 13:01:33.775548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.962 [2024-07-15 13:01:33.775557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 
13:01:33.775950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.775990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.775999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.963 [2024-07-15 13:01:33.776406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.963 [2024-07-15 13:01:33.776416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64432 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 
13:01:33.776781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.776989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.776998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.777019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.777039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.777068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.777088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.777108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.777128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.964 [2024-07-15 13:01:33.777148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.964 [2024-07-15 13:01:33.777169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.964 [2024-07-15 13:01:33.777189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.964 [2024-07-15 13:01:33.777209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.964 [2024-07-15 13:01:33.777230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.964 [2024-07-15 13:01:33.777250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.964 [2024-07-15 13:01:33.777270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.964 [2024-07-15 13:01:33.777281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.965 [2024-07-15 13:01:33.777470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.965 [2024-07-15 13:01:33.777490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2426120 is same with the state(5) to be set 00:19:17.965 [2024-07-15 13:01:33.777512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:17.965 [2024-07-15 13:01:33.777519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:17.965 [2024-07-15 13:01:33.777528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:19:17.965 [2024-07-15 13:01:33.777537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.965 [2024-07-15 13:01:33.777588] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2426120 was disconnected and freed. reset controller. 00:19:17.965 [2024-07-15 13:01:33.777809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.965 [2024-07-15 13:01:33.777838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dcd40 (9): Bad file descriptor 00:19:17.965 [2024-07-15 13:01:33.777927] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.965 [2024-07-15 13:01:33.777948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dcd40 with addr=10.0.0.2, port=4420 00:19:17.965 [2024-07-15 13:01:33.777958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dcd40 is same with the state(5) to be set 00:19:17.965 [2024-07-15 13:01:33.777977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dcd40 (9): Bad file descriptor 00:19:17.965 [2024-07-15 13:01:33.777993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.965 [2024-07-15 13:01:33.778002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:17.965 [2024-07-15 13:01:33.778013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:17.965 [2024-07-15 13:01:33.778033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:17.965 [2024-07-15 13:01:33.778052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:17.965 13:01:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:19:18.900 [2024-07-15 13:01:34.778181] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.900 [2024-07-15 13:01:34.778270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dcd40 with addr=10.0.0.2, port=4420
00:19:18.900 [2024-07-15 13:01:34.778286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dcd40 is same with the state(5) to be set
00:19:18.900 [2024-07-15 13:01:34.778311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dcd40 (9): Bad file descriptor
00:19:18.900 [2024-07-15 13:01:34.778331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:18.900 [2024-07-15 13:01:34.778342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:18.900 [2024-07-15 13:01:34.778370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:18.900 [2024-07-15 13:01:34.778407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:18.900 [2024-07-15 13:01:34.778420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:19.836 [2024-07-15 13:01:35.778566] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:19.836 [2024-07-15 13:01:35.778651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dcd40 with addr=10.0.0.2, port=4420
00:19:19.836 [2024-07-15 13:01:35.778667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dcd40 is same with the state(5) to be set
00:19:19.836 [2024-07-15 13:01:35.778708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dcd40 (9): Bad file descriptor
00:19:19.836 [2024-07-15 13:01:35.778729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:19.836 [2024-07-15 13:01:35.778738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:19.836 [2024-07-15 13:01:35.778750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:19.836 [2024-07-15 13:01:35.778790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:19.836 [2024-07-15 13:01:35.778800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:20.801 [2024-07-15 13:01:36.782151] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.801 [2024-07-15 13:01:36.782255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dcd40 with addr=10.0.0.2, port=4420 00:19:20.801 [2024-07-15 13:01:36.782271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dcd40 is same with the state(5) to be set 00:19:20.802 [2024-07-15 13:01:36.782532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dcd40 (9): Bad file descriptor 00:19:20.802 [2024-07-15 13:01:36.782808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:20.802 [2024-07-15 13:01:36.782829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:20.802 [2024-07-15 13:01:36.782842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:20.802 [2024-07-15 13:01:36.786562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:20.802 [2024-07-15 13:01:36.786591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:20.802 13:01:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.060 [2024-07-15 13:01:37.031958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.060 13:01:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82429 00:19:21.996 [2024-07-15 13:01:37.824044] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
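The block above is the recovery half of this timeout case: host/timeout.sh re-adds the TCP listener at @102, the initiator's next reconnect to 10.0.0.2:4420 succeeds, and bdev_nvme reports "Resetting controller successful." A minimal sketch of the listener toggle being exercised here, assuming an SPDK checkout with rpc.py available and reusing the subsystem, address, and port shown in the log (the matching remove_listener call appears verbatim at @126 further down):

  # drop the listener so queued I/O gets aborted and the initiator enters its reconnect loop
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # let a few attempts fail with the connect() errno 111 errors seen above
  # restore the listener; the next attempt can then finish the pending controller reset
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420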
00:19:27.263 00:19:27.263 Latency(us) 00:19:27.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.263 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:27.263 Verification LBA range: start 0x0 length 0x4000 00:19:27.263 NVMe0n1 : 10.01 5520.43 21.56 3721.71 0.00 13821.48 714.94 3019898.88 00:19:27.263 =================================================================================================================== 00:19:27.263 Total : 5520.43 21.56 3721.71 0.00 13821.48 0.00 3019898.88 00:19:27.263 0 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82295 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82295 ']' 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82295 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82295 00:19:27.263 killing process with pid 82295 00:19:27.263 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.263 00:19:27.263 Latency(us) 00:19:27.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.263 =================================================================================================================== 00:19:27.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82295' 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82295 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82295 00:19:27.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82538 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82538 /var/tmp/bdevperf.sock 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82538 ']' 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.263 13:01:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:27.263 [2024-07-15 13:01:42.964880] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:27.263 [2024-07-15 13:01:42.964981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82538 ] 00:19:27.263 [2024-07-15 13:01:43.108733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.263 [2024-07-15 13:01:43.217173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.263 [2024-07-15 13:01:43.268709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:28.194 13:01:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.194 13:01:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:28.194 13:01:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82554 00:19:28.194 13:01:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82538 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:28.194 13:01:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:28.194 13:01:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:28.452 NVMe0n1 00:19:28.452 13:01:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82594 00:19:28.452 13:01:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:28.452 13:01:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.709 Running I/O for 10 seconds... 
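Everything that follows comes from this second bdevperf instance (pid 82538): the NVMe-oF controller is attached with explicit reconnect knobs, so a lost connection is retried every 2 seconds and given up on after roughly 5 seconds of controller loss, while a bpftrace script (nvmf_timeout.bt, pid 82554) records the reconnect events. Condensed from the commands logged above into one sketch; relative paths assume the /home/vagrant/spdk_repo/spdk layout of this run, and the backgrounding, pid plumbing, and trace capture are simplified assumptions:

  # start bdevperf paused (-z, wait for the perform_tests RPC) with a 128-deep 4K randread workload
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  scripts/bpftrace.sh <bdevperf_pid> scripts/bpf/nvmf_timeout.bt > trace.txt &   # placeholder pid; output path assumed
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &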
00:19:29.642 13:01:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.901 [2024-07-15 13:01:45.715100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 
13:01:45.715376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.901 [2024-07-15 13:01:45.715703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.901 [2024-07-15 13:01:45.715714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.715987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.715997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:29.902 [2024-07-15 13:01:45.716239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.902 [2024-07-15 13:01:45.716918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.902 [2024-07-15 13:01:45.716928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.716940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.716949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.716961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.716971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.716982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.716992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:29.903 [2024-07-15 13:01:45.717338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 
13:01:45.717564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.903 [2024-07-15 13:01:45.717907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.717918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5310 is same with the state(5) to be set 00:19:29.903 [2024-07-15 13:01:45.717930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:29.903 [2024-07-15 13:01:45.717938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:29.903 [2024-07-15 13:01:45.717947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39992 len:8 PRP1 0x0 PRP2 0x0 00:19:29.903 [2024-07-15 13:01:45.717961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.903 [2024-07-15 13:01:45.718014] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18b5310 was disconnected and freed. reset controller. 
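The wall of NOTICE pairs from 13:01:45.715 down to here is a single event: after the listener is dropped again (the remove_listener call at @126 above), qpair 0x18b5310 is torn down and the driver prints one command/completion pair for every outstanding request as it completes them all as ABORTED - SQ DELETION, up to the full queue depth of 128 (READs with cid 0 through 126 plus the manually completed one), before freeing the qpair and scheduling a controller reset. When skimming a capture of this output, a quick tally of the aborted commands can be had with a plain grep; build.log here is a hypothetical saved copy of the console log, not a file the test produces:

  grep -c 'ABORTED - SQ DELETION' build.log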
00:19:29.903 [2024-07-15 13:01:45.718273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.903 [2024-07-15 13:01:45.718355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1846c00 (9): Bad file descriptor 00:19:29.903 [2024-07-15 13:01:45.718476] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.903 [2024-07-15 13:01:45.718507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1846c00 with addr=10.0.0.2, port=4420 00:19:29.903 [2024-07-15 13:01:45.718519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1846c00 is same with the state(5) to be set 00:19:29.903 [2024-07-15 13:01:45.718541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1846c00 (9): Bad file descriptor 00:19:29.903 [2024-07-15 13:01:45.718558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:29.903 [2024-07-15 13:01:45.718568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:29.903 [2024-07-15 13:01:45.718579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:29.903 [2024-07-15 13:01:45.718599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:29.903 [2024-07-15 13:01:45.718610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.903 13:01:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82594 00:19:31.800 [2024-07-15 13:01:47.718827] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:31.800 [2024-07-15 13:01:47.718886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1846c00 with addr=10.0.0.2, port=4420 00:19:31.800 [2024-07-15 13:01:47.718903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1846c00 is same with the state(5) to be set 00:19:31.800 [2024-07-15 13:01:47.718935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1846c00 (9): Bad file descriptor 00:19:31.800 [2024-07-15 13:01:47.718954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:31.800 [2024-07-15 13:01:47.718964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:31.800 [2024-07-15 13:01:47.718975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:31.800 [2024-07-15 13:01:47.719004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
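With --reconnect-delay-sec 2 in effect, the failed attempts are now spaced two seconds apart (13:01:45.718, 13:01:47.718, and again at 13:01:49 below) rather than roughly one second apart as in the earlier case, and those delays are what the bpftrace probes record. Judging by the grep -c output and the (( 3 <= 2 )) evaluation further down, the script's pass condition amounts to requiring at least three 'reconnect delay' events in the trace; a rough sketch of that check, with the failure handling assumed rather than copied from timeout.sh:

  count=$(grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
  if (( count <= 2 )); then
      echo "expected at least 3 reconnect delay events, got $count" >&2
      false   # hypothetical failure handling
  fi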
00:19:31.800 [2024-07-15 13:01:47.719016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.703 [2024-07-15 13:01:49.719319] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.703 [2024-07-15 13:01:49.719417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1846c00 with addr=10.0.0.2, port=4420 00:19:33.703 [2024-07-15 13:01:49.719436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1846c00 is same with the state(5) to be set 00:19:33.703 [2024-07-15 13:01:49.719464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1846c00 (9): Bad file descriptor 00:19:33.703 [2024-07-15 13:01:49.719484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:33.703 [2024-07-15 13:01:49.719495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:33.703 [2024-07-15 13:01:49.719507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.703 [2024-07-15 13:01:49.719535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:33.703 [2024-07-15 13:01:49.719546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:36.233 [2024-07-15 13:01:51.719654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:36.233 [2024-07-15 13:01:51.719715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:36.233 [2024-07-15 13:01:51.719729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:36.233 [2024-07-15 13:01:51.719741] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:36.233 [2024-07-15 13:01:51.719768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:36.800 00:19:36.800 Latency(us) 00:19:36.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.800 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:36.800 NVMe0n1 : 8.10 2068.79 8.08 15.80 0.00 61354.16 8162.21 7015926.69 00:19:36.800 =================================================================================================================== 00:19:36.800 Total : 2068.79 8.08 15.80 0.00 61354.16 8162.21 7015926.69 00:19:36.800 0 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.800 Attaching 5 probes... 
00:19:36.800 1231.784047: reset bdev controller NVMe0 00:19:36.800 1231.931265: reconnect bdev controller NVMe0 00:19:36.800 3232.223707: reconnect delay bdev controller NVMe0 00:19:36.800 3232.244282: reconnect bdev controller NVMe0 00:19:36.800 5232.686510: reconnect delay bdev controller NVMe0 00:19:36.800 5232.708527: reconnect bdev controller NVMe0 00:19:36.800 7233.140351: reconnect delay bdev controller NVMe0 00:19:36.800 7233.165481: reconnect bdev controller NVMe0 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82554 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82538 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82538 ']' 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82538 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82538 00:19:36.800 killing process with pid 82538 00:19:36.800 Received shutdown signal, test time was about 8.163153 seconds 00:19:36.800 00:19:36.800 Latency(us) 00:19:36.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.800 =================================================================================================================== 00:19:36.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82538' 00:19:36.800 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82538 00:19:36.801 13:01:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82538 00:19:37.059 13:01:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:37.317 rmmod nvme_tcp 00:19:37.317 rmmod nvme_fabrics 00:19:37.317 rmmod nvme_keyring 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82106 ']' 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82106 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82106 ']' 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82106 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.317 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82106 00:19:37.575 killing process with pid 82106 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82106' 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82106 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82106 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.575 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.833 13:01:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:37.833 00:19:37.833 real 0m47.037s 00:19:37.833 user 2m18.386s 00:19:37.833 sys 0m5.582s 00:19:37.833 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:37.833 13:01:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.833 ************************************ 00:19:37.833 END TEST nvmf_timeout 00:19:37.833 ************************************ 00:19:37.833 13:01:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:37.833 13:01:53 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:19:37.833 13:01:53 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:19:37.833 13:01:53 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.833 13:01:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:37.833 13:01:53 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:19:37.833 00:19:37.833 real 12m15.854s 00:19:37.833 user 29m55.441s 00:19:37.833 sys 3m1.585s 00:19:37.833 13:01:53 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:37.833 13:01:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:37.833 ************************************ 00:19:37.833 END TEST nvmf_tcp 00:19:37.833 ************************************ 00:19:37.833 13:01:53 -- common/autotest_common.sh@1142 -- 
# return 0 00:19:37.833 13:01:53 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:19:37.833 13:01:53 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:37.834 13:01:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:37.834 13:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:37.834 13:01:53 -- common/autotest_common.sh@10 -- # set +x 00:19:37.834 ************************************ 00:19:37.834 START TEST nvmf_dif 00:19:37.834 ************************************ 00:19:37.834 13:01:53 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:37.834 * Looking for test storage... 00:19:37.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:37.834 13:01:53 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.834 13:01:53 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.834 13:01:53 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.834 13:01:53 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.834 13:01:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.834 13:01:53 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.834 13:01:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.834 13:01:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:37.834 13:01:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.834 13:01:53 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:38.093 13:01:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:38.093 13:01:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:38.093 13:01:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:38.093 13:01:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:38.093 13:01:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.093 13:01:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:38.093 13:01:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:38.093 13:01:53 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:38.093 Cannot find device "nvmf_tgt_br" 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.093 Cannot find device "nvmf_tgt_br2" 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:38.093 Cannot find device "nvmf_tgt_br" 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:38.093 Cannot find device "nvmf_tgt_br2" 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:38.093 13:01:53 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:38.093 13:01:54 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:38.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:38.352 00:19:38.352 --- 10.0.0.2 ping statistics --- 00:19:38.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.352 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:38.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:38.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:19:38.352 00:19:38.352 --- 10.0.0.3 ping statistics --- 00:19:38.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.352 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:38.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:38.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:38.352 00:19:38.352 --- 10.0.0.1 ping statistics --- 00:19:38.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.352 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:38.352 13:01:54 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:38.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:38.610 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:38.610 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:38.610 13:01:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.610 13:01:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:38.610 13:01:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:38.610 13:01:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.611 13:01:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:38.611 13:01:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:38.611 13:01:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:38.611 13:01:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:38.611 13:01:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.611 13:01:54 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:38.611 13:01:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.611 13:01:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83028 00:19:38.611 13:01:54 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:38.611 13:01:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83028 00:19:38.611 13:01:54 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83028 ']' 00:19:38.611 13:01:54 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.611 13:01:54 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.611 13:01:54 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.611 13:01:54 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.611 13:01:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.869 [2024-07-15 13:01:54.683124] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:38.869 [2024-07-15 13:01:54.683226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.869 [2024-07-15 13:01:54.824981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.127 [2024-07-15 13:01:54.947798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:39.127 [2024-07-15 13:01:54.947861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.127 [2024-07-15 13:01:54.947875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.127 [2024-07-15 13:01:54.947885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.127 [2024-07-15 13:01:54.947895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.127 [2024-07-15 13:01:54.947925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.127 [2024-07-15 13:01:55.002416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:19:39.723 13:01:55 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:39.723 13:01:55 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.723 13:01:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:39.723 13:01:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:39.723 [2024-07-15 13:01:55.734756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.723 13:01:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.723 13:01:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:39.723 ************************************ 00:19:39.723 START TEST fio_dif_1_default 00:19:39.723 ************************************ 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:39.723 bdev_null0 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:39.723 
13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.723 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:39.723 [2024-07-15 13:01:55.782846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.981 { 00:19:39.981 "params": { 00:19:39.981 "name": "Nvme$subsystem", 00:19:39.981 "trtype": "$TEST_TRANSPORT", 00:19:39.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.981 "adrfam": "ipv4", 00:19:39.981 "trsvcid": "$NVMF_PORT", 00:19:39.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.981 "hdgst": ${hdgst:-false}, 00:19:39.981 "ddgst": ${ddgst:-false} 00:19:39.981 }, 00:19:39.981 "method": "bdev_nvme_attach_controller" 00:19:39.981 } 00:19:39.981 EOF 00:19:39.981 )") 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@554 -- # cat 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:39.981 "params": { 00:19:39.981 "name": "Nvme0", 00:19:39.981 "trtype": "tcp", 00:19:39.981 "traddr": "10.0.0.2", 00:19:39.981 "adrfam": "ipv4", 00:19:39.981 "trsvcid": "4420", 00:19:39.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:39.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:39.981 "hdgst": false, 00:19:39.981 "ddgst": false 00:19:39.981 }, 00:19:39.981 "method": "bdev_nvme_attach_controller" 00:19:39.981 }' 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:39.981 13:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:39.981 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:39.981 fio-3.35 00:19:39.981 Starting 1 thread 00:19:52.174 00:19:52.174 filename0: (groupid=0, jobs=1): err= 0: pid=83095: Mon Jul 15 13:02:06 2024 00:19:52.174 read: IOPS=8904, BW=34.8MiB/s (36.5MB/s)(348MiB/10001msec) 00:19:52.174 slat (nsec): min=6593, max=53430, avg=8527.54, stdev=2562.26 00:19:52.174 clat (usec): min=346, max=2015, avg=424.25, stdev=27.49 00:19:52.174 lat (usec): min=352, max=2026, avg=432.78, stdev=28.18 00:19:52.174 clat percentiles (usec): 00:19:52.174 | 1.00th=[ 359], 5.00th=[ 
383], 10.00th=[ 400], 20.00th=[ 408], 00:19:52.174 | 30.00th=[ 416], 40.00th=[ 420], 50.00th=[ 424], 60.00th=[ 429], 00:19:52.174 | 70.00th=[ 437], 80.00th=[ 441], 90.00th=[ 449], 95.00th=[ 461], 00:19:52.174 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 529], 99.95th=[ 553], 00:19:52.174 | 99.99th=[ 1074] 00:19:52.174 bw ( KiB/s): min=34624, max=37376, per=100.00%, avg=35646.32, stdev=597.00, samples=19 00:19:52.174 iops : min= 8656, max= 9344, avg=8911.58, stdev=149.25, samples=19 00:19:52.174 lat (usec) : 500=99.49%, 750=0.49%, 1000=0.01% 00:19:52.174 lat (msec) : 2=0.01%, 4=0.01% 00:19:52.174 cpu : usr=85.29%, sys=12.88%, ctx=26, majf=0, minf=0 00:19:52.174 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.174 issued rwts: total=89056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.174 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:52.174 00:19:52.174 Run status group 0 (all jobs): 00:19:52.174 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=348MiB (365MB), run=10001-10001msec 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.174 00:19:52.174 real 0m11.002s 00:19:52.174 user 0m9.162s 00:19:52.174 sys 0m1.549s 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.174 ************************************ 00:19:52.174 END TEST fio_dif_1_default 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:52.174 ************************************ 00:19:52.174 13:02:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:52.174 13:02:06 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:52.174 13:02:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:52.174 13:02:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.174 13:02:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:52.174 ************************************ 00:19:52.174 START TEST fio_dif_1_multi_subsystems 00:19:52.174 ************************************ 
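Both fio_dif_1_default above and the fio_dif_1_multi_subsystems run that starts here drive fio through the SPDK bdev ioengine, feeding it a generated JSON bdev config and a generated job file over /dev/fd/62 and /dev/fd/61. Purely to illustrate what those two descriptors carry in the single-subsystem case, a hand-written equivalent could look like the sketch below. The attach-controller parameters, the spdk_bdev ioengine, the 4 KiB randread workload at queue depth 4, the plugin path and the fio binary path are taken from the trace; the /tmp file names, the runtime, the Nvme0n1 bdev name and the outer "subsystems"/"config" wrapper are assumptions rather than values copied from the log.

#!/usr/bin/env bash
# Illustrative stand-in for the config that gen_nvmf_target_json and
# gen_fio_conf generate on the fly in the trace above.

# JSON bdev config: one NVMe-oF controller attached over TCP (params as in
# the generated JSON printed by the test).
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# fio job: 4 KiB random reads at queue depth 4 against the Nvme0n1 bdev;
# thread=1 is required by the SPDK fio plugin.
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
direct=1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

# Run fio with the SPDK bdev plugin preloaded (paths as seen in the trace).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif.fio

The real test passes --ioengine and --spdk_json_conf on the fio command line and keeps both inputs on anonymous file descriptors, but the content has the same shape as this sketch.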
00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.174 bdev_null0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.174 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.175 [2024-07-15 13:02:06.835314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.175 bdev_null1 00:19:52.175 13:02:06 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:52.175 { 00:19:52.175 "params": { 00:19:52.175 "name": "Nvme$subsystem", 00:19:52.175 "trtype": "$TEST_TRANSPORT", 00:19:52.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.175 "adrfam": "ipv4", 00:19:52.175 "trsvcid": "$NVMF_PORT", 00:19:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.175 "hdgst": ${hdgst:-false}, 00:19:52.175 "ddgst": ${ddgst:-false} 00:19:52.175 }, 00:19:52.175 "method": "bdev_nvme_attach_controller" 00:19:52.175 } 00:19:52.175 EOF 00:19:52.175 )") 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:52.175 
13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:52.175 { 00:19:52.175 "params": { 00:19:52.175 "name": "Nvme$subsystem", 00:19:52.175 "trtype": "$TEST_TRANSPORT", 00:19:52.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.175 "adrfam": "ipv4", 00:19:52.175 "trsvcid": "$NVMF_PORT", 00:19:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.175 "hdgst": ${hdgst:-false}, 00:19:52.175 "ddgst": ${ddgst:-false} 00:19:52.175 }, 00:19:52.175 "method": "bdev_nvme_attach_controller" 00:19:52.175 } 00:19:52.175 EOF 00:19:52.175 )") 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:52.175 "params": { 00:19:52.175 "name": "Nvme0", 00:19:52.175 "trtype": "tcp", 00:19:52.175 "traddr": "10.0.0.2", 00:19:52.175 "adrfam": "ipv4", 00:19:52.175 "trsvcid": "4420", 00:19:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:52.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:52.175 "hdgst": false, 00:19:52.175 "ddgst": false 00:19:52.175 }, 00:19:52.175 "method": "bdev_nvme_attach_controller" 00:19:52.175 },{ 00:19:52.175 "params": { 00:19:52.175 "name": "Nvme1", 00:19:52.175 "trtype": "tcp", 00:19:52.175 "traddr": "10.0.0.2", 00:19:52.175 "adrfam": "ipv4", 00:19:52.175 "trsvcid": "4420", 00:19:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.175 "hdgst": false, 00:19:52.175 "ddgst": false 00:19:52.175 }, 00:19:52.175 "method": "bdev_nvme_attach_controller" 00:19:52.175 }' 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:52.175 13:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.175 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:52.175 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:52.175 fio-3.35 00:19:52.175 Starting 2 threads 00:20:02.162 00:20:02.162 filename0: (groupid=0, jobs=1): err= 0: pid=83254: Mon Jul 15 13:02:17 2024 00:20:02.162 read: IOPS=5013, BW=19.6MiB/s (20.5MB/s)(196MiB/10001msec) 00:20:02.162 slat (nsec): min=6626, max=65997, avg=13075.15, stdev=4373.30 00:20:02.162 clat (usec): min=586, max=1363, avg=762.27, stdev=58.07 00:20:02.162 lat (usec): min=593, max=1389, avg=775.34, stdev=59.29 00:20:02.162 clat percentiles (usec): 00:20:02.162 | 1.00th=[ 635], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 709], 00:20:02.162 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 758], 60.00th=[ 775], 00:20:02.162 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 857], 00:20:02.162 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 963], 00:20:02.162 | 99.99th=[ 1106] 00:20:02.162 bw ( KiB/s): min=19200, max=20864, per=50.04%, avg=20069.05, stdev=532.61, samples=19 00:20:02.162 iops : min= 4800, max= 
5216, avg=5017.26, stdev=133.15, samples=19 00:20:02.162 lat (usec) : 750=42.76%, 1000=57.22% 00:20:02.162 lat (msec) : 2=0.02% 00:20:02.162 cpu : usr=89.62%, sys=8.95%, ctx=15, majf=0, minf=0 00:20:02.162 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.162 issued rwts: total=50140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.162 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:02.162 filename1: (groupid=0, jobs=1): err= 0: pid=83255: Mon Jul 15 13:02:17 2024 00:20:02.162 read: IOPS=5013, BW=19.6MiB/s (20.5MB/s)(196MiB/10001msec) 00:20:02.162 slat (nsec): min=6476, max=56715, avg=13077.61, stdev=4352.99 00:20:02.162 clat (usec): min=451, max=1452, avg=762.06, stdev=50.81 00:20:02.162 lat (usec): min=458, max=1498, avg=775.14, stdev=51.31 00:20:02.162 clat percentiles (usec): 00:20:02.162 | 1.00th=[ 660], 5.00th=[ 685], 10.00th=[ 693], 20.00th=[ 717], 00:20:02.162 | 30.00th=[ 734], 40.00th=[ 750], 50.00th=[ 766], 60.00th=[ 775], 00:20:02.162 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 848], 00:20:02.162 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 922], 99.95th=[ 930], 00:20:02.162 | 99.99th=[ 996] 00:20:02.162 bw ( KiB/s): min=19200, max=20864, per=50.04%, avg=20070.74, stdev=528.90, samples=19 00:20:02.162 iops : min= 4800, max= 5216, avg=5017.68, stdev=132.22, samples=19 00:20:02.162 lat (usec) : 500=0.01%, 750=41.22%, 1000=58.77% 00:20:02.162 lat (msec) : 2=0.01% 00:20:02.162 cpu : usr=90.40%, sys=8.23%, ctx=12, majf=0, minf=0 00:20:02.162 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.162 issued rwts: total=50144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.162 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:02.162 00:20:02.162 Run status group 0 (all jobs): 00:20:02.162 READ: bw=39.2MiB/s (41.1MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=392MiB (411MB), run=10001-10001msec 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.162 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.163 00:20:02.163 real 0m11.122s 00:20:02.163 user 0m18.762s 00:20:02.163 sys 0m1.995s 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.163 ************************************ 00:20:02.163 END TEST fio_dif_1_multi_subsystems 00:20:02.163 ************************************ 00:20:02.163 13:02:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 13:02:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:02.163 13:02:17 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:02.163 13:02:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:02.163 13:02:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.163 13:02:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 ************************************ 00:20:02.163 START TEST fio_dif_rand_params 00:20:02.163 ************************************ 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:02.163 13:02:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 bdev_null0 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.163 13:02:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 [2024-07-15 13:02:18.007539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.163 { 00:20:02.163 "params": { 00:20:02.163 "name": "Nvme$subsystem", 00:20:02.163 "trtype": "$TEST_TRANSPORT", 00:20:02.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.163 "adrfam": "ipv4", 00:20:02.163 "trsvcid": "$NVMF_PORT", 00:20:02.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.163 "hdgst": ${hdgst:-false}, 00:20:02.163 "ddgst": ${ddgst:-false} 
00:20:02.163 }, 00:20:02.163 "method": "bdev_nvme_attach_controller" 00:20:02.163 } 00:20:02.163 EOF 00:20:02.163 )") 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:02.163 "params": { 00:20:02.163 "name": "Nvme0", 00:20:02.163 "trtype": "tcp", 00:20:02.163 "traddr": "10.0.0.2", 00:20:02.163 "adrfam": "ipv4", 00:20:02.163 "trsvcid": "4420", 00:20:02.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:02.163 "hdgst": false, 00:20:02.163 "ddgst": false 00:20:02.163 }, 00:20:02.163 "method": "bdev_nvme_attach_controller" 00:20:02.163 }' 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:02.163 13:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.163 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:02.163 ... 
00:20:02.163 fio-3.35 00:20:02.163 Starting 3 threads 00:20:08.722 00:20:08.722 filename0: (groupid=0, jobs=1): err= 0: pid=83415: Mon Jul 15 13:02:23 2024 00:20:08.722 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5003msec) 00:20:08.722 slat (nsec): min=6945, max=44120, avg=10296.13, stdev=4584.25 00:20:08.722 clat (usec): min=10826, max=13523, avg=11609.95, stdev=278.76 00:20:08.722 lat (usec): min=10834, max=13544, avg=11620.24, stdev=279.14 00:20:08.722 clat percentiles (usec): 00:20:08.722 | 1.00th=[10945], 5.00th=[11076], 10.00th=[11076], 20.00th=[11469], 00:20:08.722 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:20:08.722 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11863], 00:20:08.722 | 99.00th=[11994], 99.50th=[11994], 99.90th=[13566], 99.95th=[13566], 00:20:08.722 | 99.99th=[13566] 00:20:08.722 bw ( KiB/s): min=32256, max=33792, per=33.28%, avg=32947.20, stdev=435.95, samples=10 00:20:08.722 iops : min= 252, max= 264, avg=257.40, stdev= 3.41, samples=10 00:20:08.722 lat (msec) : 20=100.00% 00:20:08.722 cpu : usr=91.16%, sys=8.26%, ctx=11, majf=0, minf=9 00:20:08.722 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.722 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.722 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.722 filename0: (groupid=0, jobs=1): err= 0: pid=83416: Mon Jul 15 13:02:23 2024 00:20:08.722 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5001msec) 00:20:08.722 slat (nsec): min=7784, max=50197, avg=14490.62, stdev=4037.62 00:20:08.722 clat (usec): min=9473, max=13109, avg=11597.94, stdev=294.84 00:20:08.722 lat (usec): min=9486, max=13135, avg=11612.43, stdev=294.91 00:20:08.722 clat percentiles (usec): 00:20:08.722 | 1.00th=[10814], 5.00th=[11076], 10.00th=[11076], 20.00th=[11469], 00:20:08.722 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:20:08.722 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11863], 00:20:08.722 | 99.00th=[11994], 99.50th=[11994], 99.90th=[13042], 99.95th=[13173], 00:20:08.722 | 99.99th=[13173] 00:20:08.722 bw ( KiB/s): min=32256, max=33024, per=33.18%, avg=32853.33, stdev=338.66, samples=9 00:20:08.722 iops : min= 252, max= 258, avg=256.67, stdev= 2.65, samples=9 00:20:08.722 lat (msec) : 10=0.23%, 20=99.77% 00:20:08.722 cpu : usr=91.54%, sys=7.84%, ctx=7, majf=0, minf=9 00:20:08.722 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.722 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.722 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.722 filename0: (groupid=0, jobs=1): err= 0: pid=83417: Mon Jul 15 13:02:23 2024 00:20:08.722 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5001msec) 00:20:08.722 slat (nsec): min=7421, max=49981, avg=14717.03, stdev=3950.69 00:20:08.722 clat (usec): min=9505, max=12985, avg=11596.67, stdev=291.44 00:20:08.722 lat (usec): min=9517, max=13004, avg=11611.39, stdev=291.56 00:20:08.722 clat percentiles (usec): 00:20:08.722 | 1.00th=[10945], 5.00th=[11076], 10.00th=[11076], 20.00th=[11469], 00:20:08.722 | 30.00th=[11600], 40.00th=[11600], 
50.00th=[11731], 60.00th=[11731], 00:20:08.722 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11863], 00:20:08.723 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12911], 99.95th=[13042], 00:20:08.723 | 99.99th=[13042] 00:20:08.723 bw ( KiB/s): min=32256, max=33024, per=33.18%, avg=32853.33, stdev=338.66, samples=9 00:20:08.723 iops : min= 252, max= 258, avg=256.67, stdev= 2.65, samples=9 00:20:08.723 lat (msec) : 10=0.23%, 20=99.77% 00:20:08.723 cpu : usr=91.34%, sys=8.06%, ctx=38, majf=0, minf=9 00:20:08.723 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.723 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.723 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.723 00:20:08.723 Run status group 0 (all jobs): 00:20:08.723 READ: bw=96.7MiB/s (101MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=484MiB (507MB), run=5001-5003msec 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:08.723 13:02:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 bdev_null0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 [2024-07-15 13:02:23.988928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 bdev_null1 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 bdev_null2 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 
00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.723 { 00:20:08.723 "params": { 00:20:08.723 "name": "Nvme$subsystem", 00:20:08.723 "trtype": "$TEST_TRANSPORT", 00:20:08.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.723 "adrfam": "ipv4", 00:20:08.723 "trsvcid": "$NVMF_PORT", 00:20:08.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.723 "hdgst": ${hdgst:-false}, 00:20:08.723 "ddgst": ${ddgst:-false} 00:20:08.723 }, 00:20:08.723 "method": "bdev_nvme_attach_controller" 00:20:08.723 } 00:20:08.723 EOF 00:20:08.723 )") 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.723 { 00:20:08.723 "params": { 00:20:08.723 "name": "Nvme$subsystem", 00:20:08.723 "trtype": "$TEST_TRANSPORT", 00:20:08.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.723 "adrfam": "ipv4", 00:20:08.723 "trsvcid": "$NVMF_PORT", 00:20:08.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.723 "hdgst": ${hdgst:-false}, 00:20:08.723 "ddgst": ${ddgst:-false} 00:20:08.723 }, 00:20:08.723 "method": "bdev_nvme_attach_controller" 00:20:08.723 } 00:20:08.723 EOF 00:20:08.723 )") 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:08.723 13:02:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.723 { 00:20:08.723 "params": { 00:20:08.723 "name": "Nvme$subsystem", 00:20:08.723 "trtype": "$TEST_TRANSPORT", 00:20:08.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.723 "adrfam": "ipv4", 00:20:08.723 "trsvcid": "$NVMF_PORT", 00:20:08.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.723 "hdgst": ${hdgst:-false}, 00:20:08.723 "ddgst": ${ddgst:-false} 00:20:08.723 }, 00:20:08.723 "method": "bdev_nvme_attach_controller" 00:20:08.723 } 00:20:08.723 EOF 00:20:08.723 )") 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:08.723 "params": { 00:20:08.723 "name": "Nvme0", 00:20:08.723 "trtype": "tcp", 00:20:08.723 "traddr": "10.0.0.2", 00:20:08.723 "adrfam": "ipv4", 00:20:08.723 "trsvcid": "4420", 00:20:08.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:08.723 "hdgst": false, 00:20:08.723 "ddgst": false 00:20:08.723 }, 00:20:08.723 "method": "bdev_nvme_attach_controller" 00:20:08.723 },{ 00:20:08.723 "params": { 00:20:08.723 "name": "Nvme1", 00:20:08.723 "trtype": "tcp", 00:20:08.723 "traddr": "10.0.0.2", 00:20:08.723 "adrfam": "ipv4", 00:20:08.723 "trsvcid": "4420", 00:20:08.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.723 "hdgst": false, 00:20:08.723 "ddgst": false 00:20:08.723 }, 00:20:08.723 "method": "bdev_nvme_attach_controller" 00:20:08.723 },{ 00:20:08.723 "params": { 00:20:08.723 "name": "Nvme2", 00:20:08.723 "trtype": "tcp", 00:20:08.723 "traddr": "10.0.0.2", 00:20:08.723 "adrfam": "ipv4", 00:20:08.723 "trsvcid": "4420", 00:20:08.723 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.723 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:08.723 "hdgst": false, 00:20:08.723 "ddgst": false 00:20:08.723 }, 00:20:08.723 "method": "bdev_nvme_attach_controller" 00:20:08.723 }' 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:08.723 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.724 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.724 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:08.724 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:08.724 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:08.724 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:08.724 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:08.724 13:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.724 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:08.724 ... 00:20:08.724 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:08.724 ... 00:20:08.724 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:08.724 ... 00:20:08.724 fio-3.35 00:20:08.724 Starting 24 threads 00:20:20.931 00:20:20.931 filename0: (groupid=0, jobs=1): err= 0: pid=83513: Mon Jul 15 13:02:35 2024 00:20:20.931 read: IOPS=240, BW=964KiB/s (987kB/s)(9692KiB/10057msec) 00:20:20.931 slat (usec): min=4, max=4022, avg=16.10, stdev=99.85 00:20:20.931 clat (usec): min=1122, max=144002, avg=66296.83, stdev=23169.04 00:20:20.931 lat (usec): min=1129, max=144015, avg=66312.92, stdev=23170.24 00:20:20.931 clat percentiles (msec): 00:20:20.931 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 45], 20.00th=[ 51], 00:20:20.931 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 72], 00:20:20.931 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 95], 95.00th=[ 104], 00:20:20.931 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:20.931 | 99.99th=[ 144] 00:20:20.931 bw ( KiB/s): min= 656, max= 2032, per=4.30%, avg=962.80, stdev=261.57, samples=20 00:20:20.931 iops : min= 164, max= 508, avg=240.70, stdev=65.39, samples=20 00:20:20.931 lat (msec) : 2=0.66%, 4=1.86%, 10=2.77%, 50=15.15%, 100=73.38% 00:20:20.931 lat (msec) : 250=6.19% 00:20:20.931 cpu : usr=42.82%, sys=2.65%, ctx=1504, majf=0, minf=9 00:20:20.931 IO depths : 1=0.2%, 2=1.8%, 4=6.2%, 8=76.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:20.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 issued rwts: total=2423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.931 filename0: (groupid=0, jobs=1): err= 0: pid=83514: Mon Jul 15 13:02:35 2024 00:20:20.931 read: IOPS=238, BW=955KiB/s (978kB/s)(9592KiB/10041msec) 00:20:20.931 slat (usec): min=6, max=6031, avg=19.00, stdev=147.81 00:20:20.931 clat (msec): min=17, max=147, avg=66.85, stdev=18.36 00:20:20.931 lat (msec): min=17, max=147, avg=66.87, stdev=18.37 00:20:20.931 clat percentiles (msec): 00:20:20.931 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 50], 00:20:20.931 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:20:20.931 | 70.00th=[ 74], 80.00th=[ 80], 90.00th=[ 91], 95.00th=[ 102], 00:20:20.931 | 99.00th=[ 116], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 148], 00:20:20.931 | 99.99th=[ 148] 00:20:20.931 bw ( KiB/s): min= 840, max= 1152, per=4.26%, avg=952.80, stdev=74.73, samples=20 00:20:20.931 iops : min= 210, max= 288, avg=238.20, stdev=18.68, samples=20 00:20:20.931 lat (msec) : 20=0.58%, 50=20.35%, 100=73.73%, 250=5.34% 00:20:20.931 cpu : usr=42.28%, sys=2.83%, ctx=1592, majf=0, minf=9 00:20:20.931 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:20.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 issued rwts: 
total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.931 filename0: (groupid=0, jobs=1): err= 0: pid=83515: Mon Jul 15 13:02:35 2024 00:20:20.931 read: IOPS=230, BW=921KiB/s (943kB/s)(9220KiB/10013msec) 00:20:20.931 slat (usec): min=8, max=8029, avg=26.30, stdev=300.73 00:20:20.931 clat (msec): min=16, max=144, avg=69.35, stdev=18.54 00:20:20.931 lat (msec): min=20, max=144, avg=69.37, stdev=18.53 00:20:20.931 clat percentiles (msec): 00:20:20.931 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:20:20.931 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.931 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 106], 00:20:20.931 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:20.931 | 99.99th=[ 144] 00:20:20.931 bw ( KiB/s): min= 640, max= 1024, per=4.10%, avg=917.95, stdev=102.99, samples=20 00:20:20.931 iops : min= 160, max= 256, avg=229.45, stdev=25.80, samples=20 00:20:20.931 lat (msec) : 20=0.04%, 50=19.57%, 100=74.97%, 250=5.42% 00:20:20.931 cpu : usr=31.31%, sys=1.90%, ctx=870, majf=0, minf=9 00:20:20.931 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=78.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:20.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.931 filename0: (groupid=0, jobs=1): err= 0: pid=83516: Mon Jul 15 13:02:35 2024 00:20:20.931 read: IOPS=236, BW=948KiB/s (971kB/s)(9504KiB/10026msec) 00:20:20.931 slat (usec): min=3, max=8030, avg=34.12, stdev=402.11 00:20:20.931 clat (msec): min=24, max=134, avg=67.32, stdev=17.44 00:20:20.931 lat (msec): min=25, max=134, avg=67.35, stdev=17.44 00:20:20.931 clat percentiles (msec): 00:20:20.931 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:20:20.931 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 72], 00:20:20.931 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 96], 00:20:20.931 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:20.931 | 99.99th=[ 136] 00:20:20.931 bw ( KiB/s): min= 824, max= 1080, per=4.23%, avg=946.45, stdev=73.44, samples=20 00:20:20.931 iops : min= 206, max= 270, avg=236.60, stdev=18.34, samples=20 00:20:20.931 lat (msec) : 50=23.95%, 100=72.94%, 250=3.11% 00:20:20.931 cpu : usr=30.86%, sys=2.08%, ctx=880, majf=0, minf=9 00:20:20.931 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:20.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.931 filename0: (groupid=0, jobs=1): err= 0: pid=83517: Mon Jul 15 13:02:35 2024 00:20:20.931 read: IOPS=243, BW=975KiB/s (999kB/s)(9776KiB/10022msec) 00:20:20.931 slat (usec): min=4, max=8024, avg=24.70, stdev=256.17 00:20:20.931 clat (msec): min=23, max=124, avg=65.45, stdev=17.50 00:20:20.931 lat (msec): min=23, max=124, avg=65.48, stdev=17.50 00:20:20.931 clat percentiles (msec): 00:20:20.931 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:20:20.931 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:20:20.931 | 70.00th=[ 
73], 80.00th=[ 80], 90.00th=[ 91], 95.00th=[ 97], 00:20:20.931 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 125], 99.95th=[ 125], 00:20:20.931 | 99.99th=[ 125] 00:20:20.931 bw ( KiB/s): min= 840, max= 1128, per=4.35%, avg=974.00, stdev=73.77, samples=20 00:20:20.931 iops : min= 210, max= 282, avg=243.50, stdev=18.44, samples=20 00:20:20.931 lat (msec) : 50=25.37%, 100=70.74%, 250=3.89% 00:20:20.931 cpu : usr=41.73%, sys=2.55%, ctx=1283, majf=0, minf=9 00:20:20.931 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:20.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.931 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.931 filename0: (groupid=0, jobs=1): err= 0: pid=83518: Mon Jul 15 13:02:35 2024 00:20:20.931 read: IOPS=232, BW=932KiB/s (954kB/s)(9360KiB/10044msec) 00:20:20.931 slat (usec): min=5, max=9023, avg=36.65, stdev=368.28 00:20:20.931 clat (msec): min=17, max=132, avg=68.45, stdev=17.63 00:20:20.931 lat (msec): min=17, max=132, avg=68.49, stdev=17.63 00:20:20.931 clat percentiles (msec): 00:20:20.931 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 52], 00:20:20.931 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:20:20.931 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 100], 00:20:20.931 | 99.00th=[ 114], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 133], 00:20:20.931 | 99.99th=[ 133] 00:20:20.931 bw ( KiB/s): min= 720, max= 1152, per=4.16%, avg=929.60, stdev=94.21, samples=20 00:20:20.932 iops : min= 180, max= 288, avg=232.40, stdev=23.55, samples=20 00:20:20.932 lat (msec) : 20=0.60%, 50=16.79%, 100=78.55%, 250=4.06% 00:20:20.932 cpu : usr=41.34%, sys=2.49%, ctx=1387, majf=0, minf=9 00:20:20.932 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:20.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.932 filename0: (groupid=0, jobs=1): err= 0: pid=83519: Mon Jul 15 13:02:35 2024 00:20:20.932 read: IOPS=225, BW=902KiB/s (923kB/s)(9052KiB/10041msec) 00:20:20.932 slat (usec): min=6, max=8030, avg=26.79, stdev=336.66 00:20:20.932 clat (msec): min=22, max=144, avg=70.79, stdev=17.49 00:20:20.932 lat (msec): min=22, max=144, avg=70.81, stdev=17.49 00:20:20.932 clat percentiles (msec): 00:20:20.932 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:20:20.932 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.932 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 101], 00:20:20.932 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:20:20.932 | 99.99th=[ 144] 00:20:20.932 bw ( KiB/s): min= 776, max= 1024, per=4.02%, avg=898.80, stdev=57.53, samples=20 00:20:20.932 iops : min= 194, max= 256, avg=224.70, stdev=14.38, samples=20 00:20:20.932 lat (msec) : 50=17.06%, 100=78.08%, 250=4.86% 00:20:20.932 cpu : usr=31.40%, sys=1.97%, ctx=879, majf=0, minf=9 00:20:20.932 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=79.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:20.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 complete : 0=0.0%, 4=88.6%, 8=10.7%, 16=0.7%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 issued rwts: total=2263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.932 filename0: (groupid=0, jobs=1): err= 0: pid=83520: Mon Jul 15 13:02:35 2024 00:20:20.932 read: IOPS=239, BW=958KiB/s (981kB/s)(9596KiB/10016msec) 00:20:20.932 slat (usec): min=4, max=8028, avg=20.73, stdev=231.37 00:20:20.932 clat (msec): min=20, max=149, avg=66.71, stdev=17.77 00:20:20.932 lat (msec): min=20, max=149, avg=66.73, stdev=17.78 00:20:20.932 clat percentiles (msec): 00:20:20.932 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 48], 00:20:20.932 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:20.932 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 96], 00:20:20.932 | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 123], 99.95th=[ 150], 00:20:20.932 | 99.99th=[ 150] 00:20:20.932 bw ( KiB/s): min= 776, max= 1096, per=4.26%, avg=952.90, stdev=80.03, samples=20 00:20:20.932 iops : min= 194, max= 274, avg=238.20, stdev=20.02, samples=20 00:20:20.932 lat (msec) : 50=27.09%, 100=68.95%, 250=3.96% 00:20:20.932 cpu : usr=32.45%, sys=2.44%, ctx=892, majf=0, minf=9 00:20:20.932 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:20.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.932 filename1: (groupid=0, jobs=1): err= 0: pid=83521: Mon Jul 15 13:02:35 2024 00:20:20.932 read: IOPS=226, BW=905KiB/s (927kB/s)(9072KiB/10026msec) 00:20:20.932 slat (usec): min=8, max=8024, avg=24.63, stdev=291.89 00:20:20.932 clat (msec): min=23, max=144, avg=70.51, stdev=17.48 00:20:20.932 lat (msec): min=23, max=144, avg=70.53, stdev=17.47 00:20:20.932 clat percentiles (msec): 00:20:20.932 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:20:20.932 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.932 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 99], 00:20:20.932 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:20:20.932 | 99.99th=[ 144] 00:20:20.932 bw ( KiB/s): min= 832, max= 992, per=4.04%, avg=903.20, stdev=47.07, samples=20 00:20:20.932 iops : min= 208, max= 248, avg=225.80, stdev=11.77, samples=20 00:20:20.932 lat (msec) : 50=16.27%, 100=79.67%, 250=4.06% 00:20:20.932 cpu : usr=31.15%, sys=2.07%, ctx=861, majf=0, minf=9 00:20:20.932 IO depths : 1=0.1%, 2=0.7%, 4=3.0%, 8=79.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:20.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.932 filename1: (groupid=0, jobs=1): err= 0: pid=83522: Mon Jul 15 13:02:35 2024 00:20:20.932 read: IOPS=234, BW=937KiB/s (960kB/s)(9412KiB/10040msec) 00:20:20.932 slat (usec): min=6, max=8023, avg=30.61, stdev=368.90 00:20:20.932 clat (msec): min=16, max=131, avg=68.10, stdev=17.92 00:20:20.932 lat (msec): min=16, max=131, avg=68.13, stdev=17.92 00:20:20.932 clat percentiles (msec): 00:20:20.932 | 1.00th=[ 25], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 49], 00:20:20.932 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 
60.00th=[ 72], 00:20:20.932 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 97], 00:20:20.932 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 124], 00:20:20.932 | 99.99th=[ 132] 00:20:20.932 bw ( KiB/s): min= 776, max= 1200, per=4.18%, avg=934.80, stdev=98.13, samples=20 00:20:20.932 iops : min= 194, max= 300, avg=233.70, stdev=24.53, samples=20 00:20:20.932 lat (msec) : 20=0.59%, 50=21.33%, 100=74.46%, 250=3.61% 00:20:20.932 cpu : usr=31.05%, sys=1.89%, ctx=882, majf=0, minf=9 00:20:20.932 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:20.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.932 filename1: (groupid=0, jobs=1): err= 0: pid=83523: Mon Jul 15 13:02:35 2024 00:20:20.932 read: IOPS=238, BW=953KiB/s (976kB/s)(9540KiB/10010msec) 00:20:20.932 slat (usec): min=3, max=4029, avg=23.01, stdev=183.46 00:20:20.932 clat (msec): min=20, max=141, avg=67.04, stdev=17.00 00:20:20.932 lat (msec): min=20, max=141, avg=67.07, stdev=17.01 00:20:20.932 clat percentiles (msec): 00:20:20.932 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 50], 00:20:20.932 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:20:20.932 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 99], 00:20:20.932 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 120], 99.95th=[ 142], 00:20:20.932 | 99.99th=[ 142] 00:20:20.932 bw ( KiB/s): min= 824, max= 1080, per=4.25%, avg=950.15, stdev=72.80, samples=20 00:20:20.932 iops : min= 206, max= 270, avg=237.50, stdev=18.13, samples=20 00:20:20.932 lat (msec) : 50=20.63%, 100=75.30%, 250=4.07% 00:20:20.932 cpu : usr=40.92%, sys=2.56%, ctx=1155, majf=0, minf=9 00:20:20.932 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:20.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 issued rwts: total=2385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.932 filename1: (groupid=0, jobs=1): err= 0: pid=83524: Mon Jul 15 13:02:35 2024 00:20:20.932 read: IOPS=225, BW=901KiB/s (922kB/s)(9056KiB/10053msec) 00:20:20.932 slat (usec): min=4, max=4026, avg=18.86, stdev=145.89 00:20:20.932 clat (usec): min=1239, max=154964, avg=70905.77, stdev=23715.61 00:20:20.932 lat (usec): min=1250, max=154988, avg=70924.63, stdev=23718.79 00:20:20.932 clat percentiles (msec): 00:20:20.932 | 1.00th=[ 5], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 56], 00:20:20.932 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:20:20.932 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 97], 95.00th=[ 109], 00:20:20.932 | 99.00th=[ 131], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:20:20.932 | 99.99th=[ 155] 00:20:20.932 bw ( KiB/s): min= 624, max= 1795, per=4.02%, avg=899.35, stdev=234.35, samples=20 00:20:20.932 iops : min= 156, max= 448, avg=224.80, stdev=58.44, samples=20 00:20:20.932 lat (msec) : 2=0.09%, 4=0.75%, 10=3.40%, 50=11.04%, 100=77.08% 00:20:20.932 lat (msec) : 250=7.64% 00:20:20.932 cpu : usr=43.75%, sys=2.71%, ctx=1467, majf=0, minf=10 00:20:20.932 IO depths : 1=0.1%, 2=2.8%, 4=11.5%, 8=70.7%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:20.932 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 complete : 0=0.0%, 4=90.7%, 8=6.8%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 issued rwts: total=2264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.932 filename1: (groupid=0, jobs=1): err= 0: pid=83525: Mon Jul 15 13:02:35 2024 00:20:20.932 read: IOPS=235, BW=944KiB/s (966kB/s)(9440KiB/10004msec) 00:20:20.932 slat (usec): min=4, max=8026, avg=16.42, stdev=165.01 00:20:20.932 clat (msec): min=2, max=152, avg=67.74, stdev=20.12 00:20:20.932 lat (msec): min=2, max=152, avg=67.75, stdev=20.12 00:20:20.932 clat percentiles (msec): 00:20:20.932 | 1.00th=[ 5], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 49], 00:20:20.932 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.932 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 105], 00:20:20.932 | 99.00th=[ 120], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 153], 00:20:20.932 | 99.99th=[ 153] 00:20:20.932 bw ( KiB/s): min= 640, max= 1024, per=4.10%, avg=917.11, stdev=97.68, samples=19 00:20:20.932 iops : min= 160, max= 256, avg=229.26, stdev=24.44, samples=19 00:20:20.932 lat (msec) : 4=0.72%, 10=1.19%, 50=21.91%, 100=71.14%, 250=5.04% 00:20:20.932 cpu : usr=31.44%, sys=1.82%, ctx=899, majf=0, minf=9 00:20:20.932 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:20.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 complete : 0=0.0%, 4=88.9%, 8=9.6%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.932 issued rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.932 filename1: (groupid=0, jobs=1): err= 0: pid=83526: Mon Jul 15 13:02:35 2024 00:20:20.932 read: IOPS=234, BW=936KiB/s (959kB/s)(9368KiB/10006msec) 00:20:20.932 slat (usec): min=4, max=8027, avg=21.12, stdev=202.82 00:20:20.932 clat (msec): min=7, max=135, avg=68.25, stdev=17.80 00:20:20.932 lat (msec): min=7, max=135, avg=68.28, stdev=17.79 00:20:20.932 clat percentiles (msec): 00:20:20.932 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:20:20.932 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:20:20.932 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 101], 00:20:20.932 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 132], 99.95th=[ 136], 00:20:20.932 | 99.99th=[ 136] 00:20:20.932 bw ( KiB/s): min= 640, max= 1024, per=4.13%, avg=924.21, stdev=98.72, samples=19 00:20:20.932 iops : min= 160, max= 256, avg=231.05, stdev=24.68, samples=19 00:20:20.932 lat (msec) : 10=0.13%, 20=0.26%, 50=21.01%, 100=73.78%, 250=4.82% 00:20:20.932 cpu : usr=37.07%, sys=2.26%, ctx=1057, majf=0, minf=9 00:20:20.933 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:20.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.933 filename1: (groupid=0, jobs=1): err= 0: pid=83527: Mon Jul 15 13:02:35 2024 00:20:20.933 read: IOPS=238, BW=953KiB/s (976kB/s)(9568KiB/10037msec) 00:20:20.933 slat (usec): min=5, max=4028, avg=16.37, stdev=82.20 00:20:20.933 clat (msec): min=19, max=122, avg=67.00, stdev=17.70 00:20:20.933 lat (msec): min=19, max=122, avg=67.02, stdev=17.70 00:20:20.933 clat 
percentiles (msec): 00:20:20.933 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 49], 00:20:20.933 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:20:20.933 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 99], 00:20:20.933 | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 117], 99.95th=[ 117], 00:20:20.933 | 99.99th=[ 124] 00:20:20.933 bw ( KiB/s): min= 800, max= 1125, per=4.25%, avg=950.25, stdev=77.36, samples=20 00:20:20.933 iops : min= 200, max= 281, avg=237.55, stdev=19.31, samples=20 00:20:20.933 lat (msec) : 20=0.13%, 50=22.99%, 100=72.49%, 250=4.39% 00:20:20.933 cpu : usr=40.58%, sys=2.35%, ctx=1481, majf=0, minf=9 00:20:20.933 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:20.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 issued rwts: total=2392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.933 filename1: (groupid=0, jobs=1): err= 0: pid=83528: Mon Jul 15 13:02:35 2024 00:20:20.933 read: IOPS=241, BW=966KiB/s (990kB/s)(9668KiB/10004msec) 00:20:20.933 slat (usec): min=7, max=8031, avg=20.36, stdev=230.54 00:20:20.933 clat (msec): min=2, max=150, avg=66.13, stdev=19.45 00:20:20.933 lat (msec): min=2, max=150, avg=66.15, stdev=19.45 00:20:20.933 clat percentiles (msec): 00:20:20.933 | 1.00th=[ 5], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:20:20.933 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:20:20.933 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 96], 00:20:20.933 | 99.00th=[ 114], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 150], 00:20:20.933 | 99.99th=[ 150] 00:20:20.933 bw ( KiB/s): min= 656, max= 1024, per=4.21%, avg=941.05, stdev=93.46, samples=19 00:20:20.933 iops : min= 164, max= 256, avg=235.26, stdev=23.36, samples=19 00:20:20.933 lat (msec) : 4=0.66%, 10=1.16%, 20=0.29%, 50=22.96%, 100=71.08% 00:20:20.933 lat (msec) : 250=3.85% 00:20:20.933 cpu : usr=36.32%, sys=2.09%, ctx=1027, majf=0, minf=9 00:20:20.933 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:20.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.933 filename2: (groupid=0, jobs=1): err= 0: pid=83529: Mon Jul 15 13:02:35 2024 00:20:20.933 read: IOPS=233, BW=934KiB/s (956kB/s)(9380KiB/10048msec) 00:20:20.933 slat (usec): min=3, max=8029, avg=21.88, stdev=248.25 00:20:20.933 clat (msec): min=4, max=145, avg=68.43, stdev=20.31 00:20:20.933 lat (msec): min=4, max=145, avg=68.45, stdev=20.32 00:20:20.933 clat percentiles (msec): 00:20:20.933 | 1.00th=[ 5], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 53], 00:20:20.933 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:20:20.933 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 95], 95.00th=[ 102], 00:20:20.933 | 99.00th=[ 125], 99.50th=[ 133], 99.90th=[ 133], 99.95th=[ 146], 00:20:20.933 | 99.99th=[ 146] 00:20:20.933 bw ( KiB/s): min= 752, max= 1520, per=4.17%, avg=931.15, stdev=155.08, samples=20 00:20:20.933 iops : min= 188, max= 380, avg=232.75, stdev=38.75, samples=20 00:20:20.933 lat (msec) : 10=2.64%, 20=0.60%, 50=14.33%, 100=75.52%, 250=6.91% 00:20:20.933 cpu : usr=40.46%, 
sys=2.67%, ctx=1311, majf=0, minf=9 00:20:20.933 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=76.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:20.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 issued rwts: total=2345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.933 filename2: (groupid=0, jobs=1): err= 0: pid=83530: Mon Jul 15 13:02:35 2024 00:20:20.933 read: IOPS=227, BW=912KiB/s (934kB/s)(9152KiB/10037msec) 00:20:20.933 slat (usec): min=7, max=8027, avg=24.75, stdev=235.42 00:20:20.933 clat (msec): min=33, max=143, avg=69.96, stdev=18.54 00:20:20.933 lat (msec): min=33, max=143, avg=69.99, stdev=18.55 00:20:20.933 clat percentiles (msec): 00:20:20.933 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:20:20.933 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 73], 00:20:20.933 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 106], 00:20:20.933 | 99.00th=[ 128], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:20.933 | 99.99th=[ 144] 00:20:20.933 bw ( KiB/s): min= 640, max= 1024, per=4.06%, avg=908.80, stdev=99.93, samples=20 00:20:20.933 iops : min= 160, max= 256, avg=227.20, stdev=24.98, samples=20 00:20:20.933 lat (msec) : 50=15.60%, 100=77.84%, 250=6.56% 00:20:20.933 cpu : usr=42.40%, sys=2.64%, ctx=1668, majf=0, minf=9 00:20:20.933 IO depths : 1=0.1%, 2=2.0%, 4=8.0%, 8=75.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:20.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 issued rwts: total=2288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.933 filename2: (groupid=0, jobs=1): err= 0: pid=83531: Mon Jul 15 13:02:35 2024 00:20:20.933 read: IOPS=234, BW=939KiB/s (961kB/s)(9404KiB/10017msec) 00:20:20.933 slat (usec): min=8, max=8025, avg=23.97, stdev=286.10 00:20:20.933 clat (msec): min=20, max=152, avg=68.05, stdev=17.70 00:20:20.933 lat (msec): min=20, max=152, avg=68.08, stdev=17.70 00:20:20.933 clat percentiles (msec): 00:20:20.933 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 48], 00:20:20.933 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.933 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 96], 00:20:20.933 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 132], 99.95th=[ 153], 00:20:20.933 | 99.99th=[ 153] 00:20:20.933 bw ( KiB/s): min= 784, max= 1024, per=4.17%, avg=933.60, stdev=64.79, samples=20 00:20:20.933 iops : min= 196, max= 256, avg=233.35, stdev=16.23, samples=20 00:20:20.933 lat (msec) : 50=24.37%, 100=72.05%, 250=3.57% 00:20:20.933 cpu : usr=31.34%, sys=1.88%, ctx=874, majf=0, minf=9 00:20:20.933 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:20.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 issued rwts: total=2351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.933 filename2: (groupid=0, jobs=1): err= 0: pid=83532: Mon Jul 15 13:02:35 2024 00:20:20.933 read: IOPS=236, BW=948KiB/s (971kB/s)(9492KiB/10014msec) 00:20:20.933 slat (usec): min=4, max=8030, avg=27.57, stdev=328.79 00:20:20.933 clat (msec): min=17, 
max=151, avg=67.39, stdev=18.71 00:20:20.933 lat (msec): min=17, max=151, avg=67.42, stdev=18.72 00:20:20.933 clat percentiles (msec): 00:20:20.933 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 48], 00:20:20.933 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:20:20.933 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 106], 00:20:20.933 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 153], 00:20:20.933 | 99.99th=[ 153] 00:20:20.933 bw ( KiB/s): min= 640, max= 1048, per=4.21%, avg=942.70, stdev=88.20, samples=20 00:20:20.933 iops : min= 160, max= 262, avg=235.65, stdev=22.07, samples=20 00:20:20.933 lat (msec) : 20=0.29%, 50=24.40%, 100=69.95%, 250=5.35% 00:20:20.933 cpu : usr=32.93%, sys=1.96%, ctx=891, majf=0, minf=9 00:20:20.933 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:20.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 issued rwts: total=2373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.933 filename2: (groupid=0, jobs=1): err= 0: pid=83533: Mon Jul 15 13:02:35 2024 00:20:20.933 read: IOPS=230, BW=923KiB/s (945kB/s)(9244KiB/10012msec) 00:20:20.933 slat (usec): min=4, max=8029, avg=23.66, stdev=288.66 00:20:20.933 clat (msec): min=20, max=143, avg=69.19, stdev=17.97 00:20:20.933 lat (msec): min=20, max=143, avg=69.21, stdev=17.97 00:20:20.933 clat percentiles (msec): 00:20:20.933 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:20:20.933 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.933 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 97], 00:20:20.933 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 144], 00:20:20.933 | 99.99th=[ 144] 00:20:20.933 bw ( KiB/s): min= 656, max= 1072, per=4.12%, avg=920.45, stdev=103.26, samples=20 00:20:20.933 iops : min= 164, max= 268, avg=230.10, stdev=25.83, samples=20 00:20:20.933 lat (msec) : 50=22.41%, 100=73.43%, 250=4.15% 00:20:20.933 cpu : usr=31.15%, sys=2.05%, ctx=892, majf=0, minf=9 00:20:20.933 IO depths : 1=0.1%, 2=1.7%, 4=6.7%, 8=76.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:20.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.933 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.933 filename2: (groupid=0, jobs=1): err= 0: pid=83534: Mon Jul 15 13:02:35 2024 00:20:20.933 read: IOPS=230, BW=924KiB/s (946kB/s)(9268KiB/10032msec) 00:20:20.933 slat (usec): min=3, max=8023, avg=20.76, stdev=235.30 00:20:20.933 clat (msec): min=35, max=143, avg=69.13, stdev=17.79 00:20:20.933 lat (msec): min=35, max=143, avg=69.15, stdev=17.79 00:20:20.933 clat percentiles (msec): 00:20:20.933 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:20:20.933 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.933 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 106], 00:20:20.933 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:20.933 | 99.99th=[ 144] 00:20:20.933 bw ( KiB/s): min= 656, max= 1024, per=4.12%, avg=920.30, stdev=85.54, samples=20 00:20:20.933 iops : min= 164, max= 256, avg=230.05, stdev=21.38, samples=20 00:20:20.933 lat (msec) : 50=19.98%, 100=74.62%, 
250=5.39% 00:20:20.933 cpu : usr=31.24%, sys=1.71%, ctx=875, majf=0, minf=9 00:20:20.934 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:20.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.934 complete : 0=0.0%, 4=88.5%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.934 issued rwts: total=2317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.934 filename2: (groupid=0, jobs=1): err= 0: pid=83535: Mon Jul 15 13:02:35 2024 00:20:20.934 read: IOPS=227, BW=909KiB/s (931kB/s)(9132KiB/10041msec) 00:20:20.934 slat (usec): min=6, max=8031, avg=21.55, stdev=205.79 00:20:20.934 clat (msec): min=17, max=124, avg=70.21, stdev=17.52 00:20:20.934 lat (msec): min=17, max=124, avg=70.23, stdev=17.52 00:20:20.934 clat percentiles (msec): 00:20:20.934 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:20:20.934 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:20:20.934 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 95], 95.00th=[ 101], 00:20:20.934 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 120], 99.95th=[ 126], 00:20:20.934 | 99.99th=[ 126] 00:20:20.934 bw ( KiB/s): min= 672, max= 1024, per=4.05%, avg=906.80, stdev=99.15, samples=20 00:20:20.934 iops : min= 168, max= 256, avg=226.70, stdev=24.79, samples=20 00:20:20.934 lat (msec) : 20=0.61%, 50=16.12%, 100=78.71%, 250=4.56% 00:20:20.934 cpu : usr=43.99%, sys=2.64%, ctx=1340, majf=0, minf=10 00:20:20.934 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:20.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.934 complete : 0=0.0%, 4=89.2%, 8=9.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.934 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.934 filename2: (groupid=0, jobs=1): err= 0: pid=83536: Mon Jul 15 13:02:35 2024 00:20:20.934 read: IOPS=219, BW=878KiB/s (899kB/s)(8788KiB/10009msec) 00:20:20.934 slat (usec): min=4, max=4197, avg=19.39, stdev=144.12 00:20:20.934 clat (msec): min=18, max=140, avg=72.74, stdev=17.80 00:20:20.934 lat (msec): min=18, max=140, avg=72.76, stdev=17.80 00:20:20.934 clat percentiles (msec): 00:20:20.934 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 59], 00:20:20.934 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:20:20.934 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 100], 95.00th=[ 107], 00:20:20.934 | 99.00th=[ 118], 99.50th=[ 124], 99.90th=[ 124], 99.95th=[ 142], 00:20:20.934 | 99.99th=[ 142] 00:20:20.934 bw ( KiB/s): min= 640, max= 1072, per=3.91%, avg=874.80, stdev=114.95, samples=20 00:20:20.934 iops : min= 160, max= 268, avg=218.70, stdev=28.74, samples=20 00:20:20.934 lat (msec) : 20=0.27%, 50=11.38%, 100=80.84%, 250=7.51% 00:20:20.934 cpu : usr=42.68%, sys=2.16%, ctx=1269, majf=0, minf=9 00:20:20.934 IO depths : 1=0.1%, 2=3.6%, 4=14.2%, 8=68.3%, 16=13.9%, 32=0.0%, >=64=0.0% 00:20:20.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.934 complete : 0=0.0%, 4=91.0%, 8=5.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.934 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.934 00:20:20.934 Run status group 0 (all jobs): 00:20:20.934 READ: bw=21.8MiB/s (22.9MB/s), 878KiB/s-975KiB/s (899kB/s-999kB/s), io=220MiB (230MB), run=10004-10057msec 00:20:20.934 
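
The per-file results above can be sanity-checked against the group summary that closes the run (READ: bw=21.8MiB/s, io=220MiB over roughly 10 s): summing the per-file avg= bandwidths should land close to that aggregate figure. A minimal post-processing sketch, assuming the console output has been captured to a file with one fio status line per line (fio.log is a hypothetical name):

# Sum fio's per-file average read bandwidths (KiB/s) and report MiB/s;
# the result should roughly match the "READ: bw=..." group summary line.
awk -F'avg=' '/bw \( *KiB\/s\)/ { split($2, a, ","); sum += a[1] }
              END { printf "aggregate read bw: %.1f MiB/s\n", sum / 1024 }' fio.log
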
13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
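
The destroy_subsystems/create_subsystems pair traced here reduces to a handful of SPDK JSON-RPCs per subsystem: nvmf_delete_subsystem plus bdev_null_delete on the way down, then bdev_null_create (a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1), nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener on the way back up; rpc_cmd is the harness wrapper around scripts/rpc.py. A minimal sketch of the same sequence driven by hand (run from the SPDK repository root, default RPC socket assumed):

# Tear down the three null-bdev-backed subsystems from the previous case...
for i in 0 1 2; do
    ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    ./scripts/rpc.py bdev_null_delete "bdev_null${i}"
done
# ...then re-create subsystems 0 and 1 backed by DIF-enabled null bdevs,
# mirroring the rpc_cmd calls that follow in this log.
for i in 0 1; do
    ./scripts/rpc.py bdev_null_create "bdev_null${i}" 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" \
        --serial-number "53313233-${i}" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "bdev_null${i}"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" \
        -t tcp -a 10.0.0.2 -s 4420
done
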
00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 bdev_null0 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 [2024-07-15 13:02:35.325630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 bdev_null1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:20.934 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:20.934 { 00:20:20.934 "params": { 00:20:20.934 "name": "Nvme$subsystem", 00:20:20.934 "trtype": "$TEST_TRANSPORT", 00:20:20.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.935 "adrfam": "ipv4", 00:20:20.935 "trsvcid": "$NVMF_PORT", 00:20:20.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.935 "hdgst": ${hdgst:-false}, 00:20:20.935 "ddgst": ${ddgst:-false} 00:20:20.935 }, 00:20:20.935 "method": "bdev_nvme_attach_controller" 00:20:20.935 } 00:20:20.935 EOF 00:20:20.935 )") 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:20.935 { 00:20:20.935 "params": { 00:20:20.935 "name": "Nvme$subsystem", 00:20:20.935 "trtype": "$TEST_TRANSPORT", 00:20:20.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.935 "adrfam": "ipv4", 00:20:20.935 "trsvcid": "$NVMF_PORT", 00:20:20.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.935 "hdgst": ${hdgst:-false}, 00:20:20.935 "ddgst": ${ddgst:-false} 00:20:20.935 }, 00:20:20.935 "method": "bdev_nvme_attach_controller" 00:20:20.935 } 00:20:20.935 EOF 00:20:20.935 )") 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
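
The fio_plugin/fio_bdev helpers traced above are a thin wrapper around preloading SPDK's external ioengine and feeding it a generated bdev config: the harness ldd's build/fio/spdk_bdev to decide whether an ASAN runtime must be preloaded first, then execs /usr/src/fio/fio with --ioengine=spdk_bdev and --spdk_json_conf. Stripped of the /dev/fd plumbing, a hand-run equivalent looks roughly like the following (bdev.json and jobfile.fio are placeholder file names; the other paths are the ones visible in this log):

# Preload the SPDK fio bdev plugin and point fio at a JSON bdev config.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json jobfile.fio
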
00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:20.935 "params": { 00:20:20.935 "name": "Nvme0", 00:20:20.935 "trtype": "tcp", 00:20:20.935 "traddr": "10.0.0.2", 00:20:20.935 "adrfam": "ipv4", 00:20:20.935 "trsvcid": "4420", 00:20:20.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.935 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:20.935 "hdgst": false, 00:20:20.935 "ddgst": false 00:20:20.935 }, 00:20:20.935 "method": "bdev_nvme_attach_controller" 00:20:20.935 },{ 00:20:20.935 "params": { 00:20:20.935 "name": "Nvme1", 00:20:20.935 "trtype": "tcp", 00:20:20.935 "traddr": "10.0.0.2", 00:20:20.935 "adrfam": "ipv4", 00:20:20.935 "trsvcid": "4420", 00:20:20.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.935 "hdgst": false, 00:20:20.935 "ddgst": false 00:20:20.935 }, 00:20:20.935 "method": "bdev_nvme_attach_controller" 00:20:20.935 }' 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:20.935 13:02:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.935 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:20.935 ... 00:20:20.935 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:20.935 ... 
00:20:20.935 fio-3.35 00:20:20.935 Starting 4 threads 00:20:25.143 00:20:25.143 filename0: (groupid=0, jobs=1): err= 0: pid=83681: Mon Jul 15 13:02:41 2024 00:20:25.143 read: IOPS=2175, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5003msec) 00:20:25.143 slat (nsec): min=7152, max=58258, avg=12889.97, stdev=4689.89 00:20:25.143 clat (usec): min=1245, max=6546, avg=3633.60, stdev=570.64 00:20:25.143 lat (usec): min=1253, max=6565, avg=3646.49, stdev=571.41 00:20:25.143 clat percentiles (usec): 00:20:25.143 | 1.00th=[ 1909], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 3326], 00:20:25.143 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:20:25.143 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4178], 95.00th=[ 4621], 00:20:25.143 | 99.00th=[ 4817], 99.50th=[ 4817], 99.90th=[ 4948], 99.95th=[ 5866], 00:20:25.143 | 99.99th=[ 5932] 00:20:25.143 bw ( KiB/s): min=16545, max=18688, per=25.85%, avg=17381.44, stdev=720.52, samples=9 00:20:25.143 iops : min= 2068, max= 2336, avg=2172.67, stdev=90.08, samples=9 00:20:25.143 lat (msec) : 2=1.16%, 4=86.27%, 10=12.57% 00:20:25.143 cpu : usr=91.26%, sys=7.88%, ctx=3, majf=0, minf=0 00:20:25.143 IO depths : 1=0.1%, 2=15.5%, 4=57.6%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.143 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.143 issued rwts: total=10885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:25.143 filename0: (groupid=0, jobs=1): err= 0: pid=83682: Mon Jul 15 13:02:41 2024 00:20:25.143 read: IOPS=2176, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5002msec) 00:20:25.143 slat (nsec): min=7243, max=57880, avg=15461.68, stdev=4405.37 00:20:25.143 clat (usec): min=1227, max=6517, avg=3623.25, stdev=573.15 00:20:25.143 lat (usec): min=1240, max=6531, avg=3638.72, stdev=573.10 00:20:25.143 clat percentiles (usec): 00:20:25.143 | 1.00th=[ 1876], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 3294], 00:20:25.143 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:20:25.143 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4178], 95.00th=[ 4621], 00:20:25.143 | 99.00th=[ 4817], 99.50th=[ 4817], 99.90th=[ 4948], 99.95th=[ 5800], 00:20:25.143 | 99.99th=[ 5932] 00:20:25.143 bw ( KiB/s): min=16479, max=18720, per=25.84%, avg=17374.11, stdev=743.35, samples=9 00:20:25.143 iops : min= 2059, max= 2340, avg=2171.67, stdev=93.05, samples=9 00:20:25.143 lat (msec) : 2=1.30%, 4=86.20%, 10=12.49% 00:20:25.143 cpu : usr=92.08%, sys=7.04%, ctx=10, majf=0, minf=10 00:20:25.143 IO depths : 1=0.1%, 2=15.5%, 4=57.6%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.143 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.143 issued rwts: total=10885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:25.143 filename1: (groupid=0, jobs=1): err= 0: pid=83683: Mon Jul 15 13:02:41 2024 00:20:25.143 read: IOPS=1878, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5002msec) 00:20:25.143 slat (usec): min=6, max=178, avg=10.53, stdev= 5.03 00:20:25.143 clat (usec): min=704, max=6829, avg=4216.38, stdev=936.64 00:20:25.143 lat (usec): min=713, max=6858, avg=4226.91, stdev=937.72 00:20:25.143 clat percentiles (usec): 00:20:25.143 | 1.00th=[ 979], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3720], 00:20:25.143 | 30.00th=[ 3752], 40.00th=[ 
3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:20:25.143 | 70.00th=[ 3949], 80.00th=[ 5014], 90.00th=[ 5866], 95.00th=[ 5997], 00:20:25.143 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 6325], 99.95th=[ 6587], 00:20:25.143 | 99.99th=[ 6849] 00:20:25.143 bw ( KiB/s): min=11776, max=16768, per=22.37%, avg=15038.22, stdev=1703.24, samples=9 00:20:25.143 iops : min= 1472, max= 2096, avg=1879.78, stdev=212.90, samples=9 00:20:25.143 lat (usec) : 750=0.31%, 1000=0.77% 00:20:25.143 lat (msec) : 2=0.45%, 4=70.39%, 10=28.09% 00:20:25.143 cpu : usr=90.90%, sys=8.02%, ctx=62, majf=0, minf=9 00:20:25.143 IO depths : 1=0.1%, 2=23.5%, 4=50.9%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.143 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.143 issued rwts: total=9395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:25.143 filename1: (groupid=0, jobs=1): err= 0: pid=83684: Mon Jul 15 13:02:41 2024 00:20:25.143 read: IOPS=2176, BW=17.0MiB/s (17.8MB/s)(85.1MiB/5004msec) 00:20:25.143 slat (nsec): min=7603, max=62010, avg=15416.94, stdev=4344.26 00:20:25.143 clat (usec): min=1230, max=6907, avg=3622.89, stdev=574.05 00:20:25.143 lat (usec): min=1243, max=6921, avg=3638.31, stdev=574.20 00:20:25.143 clat percentiles (usec): 00:20:25.143 | 1.00th=[ 1893], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 3294], 00:20:25.143 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:20:25.143 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4178], 95.00th=[ 4621], 00:20:25.143 | 99.00th=[ 4817], 99.50th=[ 4817], 99.90th=[ 4948], 99.95th=[ 5866], 00:20:25.143 | 99.99th=[ 6521] 00:20:25.143 bw ( KiB/s): min=16640, max=18688, per=25.87%, avg=17392.00, stdev=707.31, samples=9 00:20:25.143 iops : min= 2080, max= 2336, avg=2174.00, stdev=88.41, samples=9 00:20:25.143 lat (msec) : 2=1.30%, 4=86.23%, 10=12.46% 00:20:25.143 cpu : usr=91.09%, sys=8.08%, ctx=5, majf=0, minf=0 00:20:25.143 IO depths : 1=0.1%, 2=15.5%, 4=57.6%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.143 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.143 issued rwts: total=10889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:25.143 00:20:25.143 Run status group 0 (all jobs): 00:20:25.143 READ: bw=65.7MiB/s (68.8MB/s), 14.7MiB/s-17.0MiB/s (15.4MB/s-17.8MB/s), io=329MiB (345MB), run=5002-5004msec 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.403 
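
For reference, the four-thread randread run summarized above comes from the parameters set at target/dif.sh@115 earlier in this test case (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, one extra file), expanded by gen_fio_conf into one job section per subsystem. A rough reconstruction of that job file is sketched below; the section layout and the Nvme0n1/Nvme1n1 bdev names are assumptions, since the harness builds the file on the fly and only its banner lines are visible here.

# Approximate shape of the generated fio job file (hand-written sketch).
[global]
ioengine=spdk_bdev
# thread=1 is required by the SPDK fio plugin
thread=1
rw=randread
# read,write,trim block sizes, matching the banner above
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
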
13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.403 00:20:25.403 real 0m23.389s 00:20:25.403 user 2m2.748s 00:20:25.403 sys 0m9.052s 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:25.403 13:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.403 ************************************ 00:20:25.403 END TEST fio_dif_rand_params 00:20:25.403 ************************************ 00:20:25.403 13:02:41 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:25.403 13:02:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:25.403 13:02:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:25.403 13:02:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.403 13:02:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:25.403 ************************************ 00:20:25.403 START TEST fio_dif_digest 00:20:25.403 ************************************ 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- 
target/dif.sh@130 -- # create_subsystems 0 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:25.404 bdev_null0 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:25.404 [2024-07-15 13:02:41.450284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.404 { 00:20:25.404 "params": { 00:20:25.404 "name": "Nvme$subsystem", 
00:20:25.404 "trtype": "$TEST_TRANSPORT", 00:20:25.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.404 "adrfam": "ipv4", 00:20:25.404 "trsvcid": "$NVMF_PORT", 00:20:25.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.404 "hdgst": ${hdgst:-false}, 00:20:25.404 "ddgst": ${ddgst:-false} 00:20:25.404 }, 00:20:25.404 "method": "bdev_nvme_attach_controller" 00:20:25.404 } 00:20:25.404 EOF 00:20:25.404 )") 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:20:25.404 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.663 "params": { 00:20:25.663 "name": "Nvme0", 00:20:25.663 "trtype": "tcp", 00:20:25.663 "traddr": "10.0.0.2", 00:20:25.663 "adrfam": "ipv4", 00:20:25.663 "trsvcid": "4420", 00:20:25.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.663 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:25.663 "hdgst": true, 00:20:25.663 "ddgst": true 00:20:25.663 }, 00:20:25.663 "method": "bdev_nvme_attach_controller" 00:20:25.663 }' 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:25.663 13:02:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.663 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:25.663 ... 
00:20:25.663 fio-3.35 00:20:25.663 Starting 3 threads 00:20:37.860 00:20:37.860 filename0: (groupid=0, jobs=1): err= 0: pid=83784: Mon Jul 15 13:02:52 2024 00:20:37.860 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10010msec) 00:20:37.860 slat (nsec): min=7244, max=37302, avg=9993.12, stdev=3595.05 00:20:37.860 clat (usec): min=11427, max=14327, avg=12825.17, stdev=299.09 00:20:37.860 lat (usec): min=11435, max=14350, avg=12835.16, stdev=299.50 00:20:37.860 clat percentiles (usec): 00:20:37.860 | 1.00th=[12387], 5.00th=[12518], 10.00th=[12518], 20.00th=[12649], 00:20:37.860 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:20:37.860 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:20:37.860 | 99.00th=[13566], 99.50th=[13566], 99.90th=[14353], 99.95th=[14353], 00:20:37.860 | 99.99th=[14353] 00:20:37.860 bw ( KiB/s): min=29184, max=30658, per=33.35%, avg=29895.74, stdev=303.09, samples=19 00:20:37.860 iops : min= 228, max= 239, avg=233.47, stdev= 2.32, samples=19 00:20:37.860 lat (msec) : 20=100.00% 00:20:37.860 cpu : usr=90.76%, sys=8.68%, ctx=18, majf=0, minf=0 00:20:37.860 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.860 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.860 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:37.860 filename0: (groupid=0, jobs=1): err= 0: pid=83785: Mon Jul 15 13:02:52 2024 00:20:37.860 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10011msec) 00:20:37.860 slat (nsec): min=7262, max=36091, avg=10253.64, stdev=3858.90 00:20:37.861 clat (usec): min=12397, max=13780, avg=12825.91, stdev=292.68 00:20:37.861 lat (usec): min=12404, max=13805, avg=12836.16, stdev=293.18 00:20:37.861 clat percentiles (usec): 00:20:37.861 | 1.00th=[12387], 5.00th=[12518], 10.00th=[12518], 20.00th=[12649], 00:20:37.861 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:20:37.861 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:20:37.861 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13698], 99.95th=[13829], 00:20:37.861 | 99.99th=[13829] 00:20:37.861 bw ( KiB/s): min=29184, max=30476, per=33.34%, avg=29886.16, stdev=279.63, samples=19 00:20:37.861 iops : min= 228, max= 238, avg=233.42, stdev= 2.19, samples=19 00:20:37.861 lat (msec) : 20=100.00% 00:20:37.861 cpu : usr=90.49%, sys=8.95%, ctx=19, majf=0, minf=0 00:20:37.861 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.861 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.861 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:37.861 filename0: (groupid=0, jobs=1): err= 0: pid=83786: Mon Jul 15 13:02:52 2024 00:20:37.861 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10010msec) 00:20:37.861 slat (nsec): min=7254, max=40036, avg=9906.40, stdev=3731.68 00:20:37.861 clat (usec): min=9832, max=15365, avg=12825.07, stdev=323.04 00:20:37.861 lat (usec): min=9841, max=15390, avg=12834.97, stdev=323.38 00:20:37.861 clat percentiles (usec): 00:20:37.861 | 1.00th=[12518], 5.00th=[12518], 10.00th=[12518], 20.00th=[12649], 00:20:37.861 | 30.00th=[12649], 40.00th=[12649], 
50.00th=[12649], 60.00th=[12780], 00:20:37.861 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:20:37.861 | 99.00th=[13566], 99.50th=[13566], 99.90th=[15401], 99.95th=[15401], 00:20:37.861 | 99.99th=[15401] 00:20:37.861 bw ( KiB/s): min=29184, max=30720, per=33.35%, avg=29895.74, stdev=396.74, samples=19 00:20:37.861 iops : min= 228, max= 240, avg=233.47, stdev= 3.06, samples=19 00:20:37.861 lat (msec) : 10=0.13%, 20=99.87% 00:20:37.861 cpu : usr=90.00%, sys=9.44%, ctx=10, majf=0, minf=0 00:20:37.861 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.861 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.861 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:37.861 00:20:37.861 Run status group 0 (all jobs): 00:20:37.861 READ: bw=87.5MiB/s (91.8MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=876MiB (919MB), run=10010-10011msec 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.861 00:20:37.861 real 0m10.980s 00:20:37.861 user 0m27.806s 00:20:37.861 sys 0m2.953s 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.861 13:02:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:37.861 ************************************ 00:20:37.861 END TEST fio_dif_digest 00:20:37.861 ************************************ 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:37.861 13:02:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:37.861 13:02:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:37.861 rmmod nvme_tcp 00:20:37.861 rmmod nvme_fabrics 00:20:37.861 rmmod nvme_keyring 00:20:37.861 13:02:52 
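
Worth noting from the digest run that just finished: apart from NULL_DIF=3 and the 128k block size, the only change from the earlier cases is that gen_nvmf_target_json resolved hdgst/ddgst to true, so the fio-side NVMe/TCP connection negotiates header and data digests. Only the inner method/params object is printed verbatim in this log; wrapping it in SPDK's usual subsystem-config layout (assumed here) gives a standalone file that could be passed to --spdk_json_conf:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
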
nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83028 ']' 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83028 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83028 ']' 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83028 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83028 00:20:37.861 killing process with pid 83028 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83028' 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83028 00:20:37.861 13:02:52 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83028 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:37.861 13:02:52 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:37.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.861 Waiting for block devices as requested 00:20:37.861 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:37.861 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:37.861 13:02:53 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:37.861 13:02:53 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:37.861 13:02:53 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.861 13:02:53 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:37.861 13:02:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.861 13:02:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:37.861 13:02:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.861 13:02:53 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:37.861 00:20:37.861 real 0m59.581s 00:20:37.861 user 3m46.929s 00:20:37.861 sys 0m20.404s 00:20:37.861 13:02:53 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.861 13:02:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:37.861 ************************************ 00:20:37.861 END TEST nvmf_dif 00:20:37.861 ************************************ 00:20:37.861 13:02:53 -- common/autotest_common.sh@1142 -- # return 0 00:20:37.861 13:02:53 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:37.861 13:02:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:37.861 13:02:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.861 13:02:53 -- common/autotest_common.sh@10 -- # set +x 00:20:37.861 ************************************ 00:20:37.861 START TEST nvmf_abort_qd_sizes 00:20:37.861 ************************************ 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:37.861 * Looking for test storage... 00:20:37.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.861 13:02:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:37.862 13:02:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:37.862 Cannot find device "nvmf_tgt_br" 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.862 Cannot find device "nvmf_tgt_br2" 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:37.862 Cannot find device "nvmf_tgt_br" 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:37.862 Cannot find device "nvmf_tgt_br2" 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:37.862 13:02:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:37.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:20:37.862 00:20:37.862 --- 10.0.0.2 ping statistics --- 00:20:37.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.862 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:37.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:37.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:20:37.862 00:20:37.862 --- 10.0.0.3 ping statistics --- 00:20:37.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.862 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:37.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:37.862 00:20:37.862 --- 10.0.0.1 ping statistics --- 00:20:37.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.862 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:37.862 13:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:38.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.795 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:38.795 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84380 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84380 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84380 ']' 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.795 13:02:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:38.795 [2024-07-15 13:02:54.811433] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
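
The nvmf_veth_init trace above builds the virtual test network the rest of the run depends on: a target network namespace, three veth pairs, a bridge, and an iptables rule opening NVMe/TCP port 4420 toward the initiator side. Condensed into a stand-alone sketch (interface names, addresses and the accepted port are taken directly from the trace; it needs root and a machine where those names are free):

# Recreate the nvmf test topology by hand (mirrors nvmf_veth_init above; run as root)
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get bridged;
# the target-side interfaces move into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# accept NVMe/TCP (4420) in from the initiator interface and let traffic cross the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check, exactly as the script does
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
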
00:20:38.795 [2024-07-15 13:02:54.811510] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.052 [2024-07-15 13:02:54.954016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.052 [2024-07-15 13:02:55.063105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.052 [2024-07-15 13:02:55.063345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.052 [2024-07-15 13:02:55.063587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.052 [2024-07-15 13:02:55.063732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.052 [2024-07-15 13:02:55.063914] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.052 [2024-07-15 13:02:55.064105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.052 [2024-07-15 13:02:55.066399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.052 [2024-07-15 13:02:55.066520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.052 [2024-07-15 13:02:55.066528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.309 [2024-07-15 13:02:55.127320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:39.875 13:02:55 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
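
The nvme_in_userspace trace above picks the NVMe controllers the test may touch by walking PCI class 01, subclass 08, prog-if 02 and then filtering each BDF through pci_can_use and the zoned/in-use checks. Stripped of that filtering, the enumeration is just the pipeline shown in the trace:

# List NVMe controller BDFs (PCI class 0x0108, prog-if 02), as iter_all_pci_class_code does;
# the PCI_ALLOWED / zoned-device filtering from the trace is omitted in this sketch.
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# On this VM it prints the two emulated controllers: 0000:00:10.0 and 0000:00:11.0
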
00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.875 13:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:39.875 ************************************ 00:20:39.875 START TEST spdk_target_abort 00:20:39.875 ************************************ 00:20:39.875 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:20:39.875 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:39.875 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:39.875 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.875 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:40.134 spdk_targetn1 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:40.134 [2024-07-15 13:02:55.947758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:40.134 [2024-07-15 13:02:55.979903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.134 13:02:55 
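
For spdk_target_abort the target application was launched inside the namespace earlier in the trace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf), and the rpc_cmd calls above then wire it up. Issued by hand, the same sequence would look roughly like this, assuming the stock scripts/rpc.py client that rpc_cmd wraps (it talks to the default /var/tmp/spdk.sock):

# Build the NVMe/TCP subsystem used by spdk_target_abort (mirrors the rpc_cmd calls above)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# attach the local NVMe controller at 0000:00:10.0; its namespace shows up as bdev spdk_targetn1
$rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target

# TCP transport, flags copied verbatim from the trace
$rpc nvmf_create_transport -t tcp -o -u 8192

# subsystem, namespace and a listener on the namespace-side address 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
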
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:40.134 13:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:43.414 Initializing NVMe Controllers 00:20:43.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:43.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:43.414 Initialization complete. Launching workers. 
00:20:43.414 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10429, failed: 0 00:20:43.414 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1038, failed to submit 9391 00:20:43.414 success 886, unsuccess 152, failed 0 00:20:43.414 13:02:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:43.414 13:02:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:46.729 Initializing NVMe Controllers 00:20:46.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:46.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:46.729 Initialization complete. Launching workers. 00:20:46.729 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8976, failed: 0 00:20:46.729 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1149, failed to submit 7827 00:20:46.729 success 386, unsuccess 763, failed 0 00:20:46.729 13:03:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:46.729 13:03:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:50.007 Initializing NVMe Controllers 00:20:50.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:50.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:50.007 Initialization complete. Launching workers. 
00:20:50.007 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32562, failed: 0 00:20:50.007 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2316, failed to submit 30246 00:20:50.007 success 427, unsuccess 1889, failed 0 00:20:50.007 13:03:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:50.007 13:03:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.007 13:03:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:50.007 13:03:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.007 13:03:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:50.007 13:03:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.007 13:03:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84380 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84380 ']' 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84380 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84380 00:20:50.574 killing process with pid 84380 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84380' 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84380 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84380 00:20:50.574 ************************************ 00:20:50.574 END TEST spdk_target_abort 00:20:50.574 ************************************ 00:20:50.574 00:20:50.574 real 0m10.748s 00:20:50.574 user 0m43.098s 00:20:50.574 sys 0m2.221s 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.574 13:03:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:50.832 13:03:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:50.832 13:03:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:50.832 13:03:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:50.833 13:03:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.833 13:03:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:50.833 
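
Both abort tests drive the same rabort helper seen in the trace: it assembles the -r connection string field by field (trtype, adrfam, traddr, trsvcid, subnqn) and then runs the abort example once per queue depth in qds=(4 24 64). Reduced to its essentials, with the SPDK-target address from the run above, the loop is:

# Essence of the rabort helper used by both spdk_target_abort and kernel_target_abort:
# run the abort example at several queue depths against one NVMe-oF target.
abort_bin=/home/vagrant/spdk_repo/spdk/build/examples/abort
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    # -q queue depth, -w rw -M 50 for a mixed read/write load, -o 4096-byte I/O,
    # -r target descriptor, exactly as in the trace
    "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
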
************************************ 00:20:50.833 START TEST kernel_target_abort 00:20:50.833 ************************************ 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:50.833 13:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:51.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:51.090 Waiting for block devices as requested 00:20:51.090 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:51.349 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:51.349 No valid GPT data, bailing 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:51.349 No valid GPT data, bailing 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:51.349 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:51.607 No valid GPT data, bailing 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:51.607 No valid GPT data, bailing 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:51.607 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 --hostid=d239ea4f-47fe-42e0-b535-ac0b7a58df88 -a 10.0.0.1 -t tcp -s 4420 00:20:51.607 00:20:51.607 Discovery Log Number of Records 2, Generation counter 2 00:20:51.607 =====Discovery Log Entry 0====== 00:20:51.607 trtype: tcp 00:20:51.607 adrfam: ipv4 00:20:51.607 subtype: current discovery subsystem 00:20:51.607 treq: not specified, sq flow control disable supported 00:20:51.607 portid: 1 00:20:51.607 trsvcid: 4420 00:20:51.607 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:51.607 traddr: 10.0.0.1 00:20:51.607 eflags: none 00:20:51.607 sectype: none 00:20:51.607 =====Discovery Log Entry 1====== 00:20:51.607 trtype: tcp 00:20:51.607 adrfam: ipv4 00:20:51.607 subtype: nvme subsystem 00:20:51.607 treq: not specified, sq flow control disable supported 00:20:51.608 portid: 1 00:20:51.608 trsvcid: 4420 00:20:51.608 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:51.608 traddr: 10.0.0.1 00:20:51.608 eflags: none 00:20:51.608 sectype: none 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:51.608 13:03:07 
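
configure_kernel_target above sets up a Linux-kernel nvmet target purely through configfs: one subsystem backed by the still-unused /dev/nvme1n1, one TCP port on 10.0.0.1:4420, and a symlink binding the two, verified with nvme discover. The xtrace does not show the redirect targets of the echo commands, so the configfs attribute file names below are the standard nvmet ones and should be read as assumptions:

# Kernel NVMe-oF target via configfs, mirroring configure_kernel_target above (run as root)
modprobe nvmet nvmet_tcp
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

# subsystem with one namespace backed by the free NVMe block device found earlier
mkdir "$subsys" "$subsys/namespaces/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute name assumed; the trace only shows the echoed value
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP port listening on the initiator-side address
mkdir "$nvmet/ports/1"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# expose the subsystem on the port
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# verify with the discovery service; the script additionally passes the generated
# --hostnqn/--hostid seen in the trace
nvme discover -t tcp -a 10.0.0.1 -s 4420
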
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:51.608 13:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:54.896 Initializing NVMe Controllers 00:20:54.896 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:54.896 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:54.896 Initialization complete. Launching workers. 00:20:54.896 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33068, failed: 0 00:20:54.896 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33068, failed to submit 0 00:20:54.896 success 0, unsuccess 33068, failed 0 00:20:54.896 13:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:54.896 13:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:58.178 Initializing NVMe Controllers 00:20:58.178 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:58.178 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:58.178 Initialization complete. Launching workers. 
00:20:58.178 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68655, failed: 0 00:20:58.178 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29770, failed to submit 38885 00:20:58.178 success 0, unsuccess 29770, failed 0 00:20:58.178 13:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:58.178 13:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:01.462 Initializing NVMe Controllers 00:21:01.462 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:01.462 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:01.462 Initialization complete. Launching workers. 00:21:01.462 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82224, failed: 0 00:21:01.462 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20526, failed to submit 61698 00:21:01.462 success 0, unsuccess 20526, failed 0 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:01.462 13:03:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:02.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.932 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.932 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.932 00:21:03.932 real 0m13.010s 00:21:03.932 user 0m5.969s 00:21:03.932 sys 0m4.485s 00:21:03.932 13:03:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.932 13:03:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:03.932 ************************************ 00:21:03.932 END TEST kernel_target_abort 00:21:03.932 ************************************ 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:03.932 
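
clean_kernel_target, traced just above, undoes that configfs layout in the reverse order: disable the namespace, drop the port-to-subsystem link, remove the namespace, port and subsystem directories, then unload the modules. By hand (the target of the "echo 0" is not visible in the trace and is assumed to be the namespace enable file):

# Tear the kernel target back down (mirrors clean_kernel_target above; run as root)
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the trace's "echo 0"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet
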
13:03:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.932 rmmod nvme_tcp 00:21:03.932 rmmod nvme_fabrics 00:21:03.932 rmmod nvme_keyring 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84380 ']' 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84380 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84380 ']' 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84380 00:21:03.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84380) - No such process 00:21:03.932 Process with pid 84380 is not found 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84380 is not found' 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:03.932 13:03:19 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:04.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:04.191 Waiting for block devices as requested 00:21:04.191 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:04.449 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:04.449 00:21:04.449 real 0m27.016s 00:21:04.449 user 0m50.269s 00:21:04.449 sys 0m8.005s 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:04.449 13:03:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:04.449 ************************************ 00:21:04.449 END TEST nvmf_abort_qd_sizes 00:21:04.449 ************************************ 00:21:04.449 13:03:20 -- common/autotest_common.sh@1142 -- # return 0 00:21:04.449 13:03:20 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:04.449 13:03:20 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:04.449 13:03:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:04.449 13:03:20 -- common/autotest_common.sh@10 -- # set +x 00:21:04.449 ************************************ 00:21:04.450 START TEST keyring_file 00:21:04.450 ************************************ 00:21:04.450 13:03:20 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:04.709 * Looking for test storage... 00:21:04.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.709 13:03:20 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.709 13:03:20 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.709 13:03:20 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.709 13:03:20 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.709 13:03:20 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.709 13:03:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.709 13:03:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:04.709 13:03:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lHJg70vIN5 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lHJg70vIN5 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lHJg70vIN5 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lHJg70vIN5 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.trSphZJS27 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:04.709 13:03:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.trSphZJS27 00:21:04.709 13:03:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.trSphZJS27 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.trSphZJS27 00:21:04.709 13:03:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=85252 00:21:04.710 13:03:20 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.710 13:03:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85252 00:21:04.710 13:03:20 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85252 ']' 00:21:04.710 13:03:20 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.710 13:03:20 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.710 13:03:20 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.710 13:03:20 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.710 13:03:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:04.968 [2024-07-15 13:03:20.768739] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
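
The prep_key calls traced above turn a fixed hex secret into an NVMe/TCP TLS PSK interchange string and store it in a temp file restricted to the owner. A minimal sketch of that construction follows, assuming the usual interchange layout of prefix, two-hex-digit hash identifier, and base64 of the key bytes followed by their CRC-32; the helper name make_interchange_key is made up, the little-endian CRC byte order is an assumption, and whether the hex string is used literally or decoded first is inferred from the trace rather than confirmed, so treat this as an approximation of what the inline python in nvmf/common.sh computes.

# Sketch only: approximates the prep_key / format_interchange_psk steps above.
# make_interchange_key is a hypothetical name; digest 0 stands for "no hash".
make_interchange_key() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # hex string used as literal bytes (assumption)
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended; byte order assumed
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
}

path=$(mktemp)                               # e.g. /tmp/tmp.lHJg70vIN5 in this run
make_interchange_key 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"                           # keyring_file rejects group/world-readable key files
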
00:21:04.969 [2024-07-15 13:03:20.768825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85252 ] 00:21:04.969 [2024-07-15 13:03:20.908727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.969 [2024-07-15 13:03:21.017507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.227 [2024-07-15 13:03:21.074957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:05.795 13:03:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:05.795 [2024-07-15 13:03:21.758691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.795 null0 00:21:05.795 [2024-07-15 13:03:21.790633] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.795 [2024-07-15 13:03:21.790854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:05.795 [2024-07-15 13:03:21.798624] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.795 13:03:21 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.795 13:03:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:05.795 [2024-07-15 13:03:21.810627] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:05.795 request: 00:21:05.795 { 00:21:05.795 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.795 "secure_channel": false, 00:21:05.795 "listen_address": { 00:21:05.795 "trtype": "tcp", 00:21:05.795 "traddr": "127.0.0.1", 00:21:05.795 "trsvcid": "4420" 00:21:05.795 }, 00:21:05.795 "method": "nvmf_subsystem_add_listener", 00:21:05.795 "req_id": 1 00:21:05.795 } 00:21:05.795 Got JSON-RPC error response 00:21:05.795 response: 00:21:05.795 { 00:21:05.795 "code": -32602, 00:21:05.795 "message": "Invalid parameters" 00:21:05.795 } 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
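
The target configured above is already listening on 127.0.0.1:4420 for nqn.2016-06.io.spdk:cnode0, so the second nvmf_subsystem_add_listener call is expected to fail, and the NOT wrapper from autotest_common.sh only succeeds when the wrapped command exits non-zero. Outside the harness, the same negative check could be written roughly as below; this standalone snippet is an illustration rather than the test's own code, with the rpc.py path and arguments copied from the trace.

# Expect the duplicate listener registration to be rejected ("Listener already exists", JSON-RPC code -32602).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if "$rpc" nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo "unexpected success: duplicate listener was accepted" >&2
    exit 1
fi
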
00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.796 13:03:21 keyring_file -- keyring/file.sh@46 -- # bperfpid=85269 00:21:05.796 13:03:21 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85269 /var/tmp/bperf.sock 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85269 ']' 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.796 13:03:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:05.796 13:03:21 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:06.055 [2024-07-15 13:03:21.875327] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:06.055 [2024-07-15 13:03:21.875471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85269 ] 00:21:06.055 [2024-07-15 13:03:22.016212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.314 [2024-07-15 13:03:22.127871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.314 [2024-07-15 13:03:22.185228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:06.904 13:03:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.904 13:03:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:06.904 13:03:22 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHJg70vIN5 00:21:06.904 13:03:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lHJg70vIN5 00:21:06.904 13:03:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.trSphZJS27 00:21:06.904 13:03:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.trSphZJS27 00:21:07.163 13:03:23 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:07.163 13:03:23 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:07.163 13:03:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.163 13:03:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.163 13:03:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.420 13:03:23 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.lHJg70vIN5 == 
\/\t\m\p\/\t\m\p\.\l\H\J\g\7\0\v\I\N\5 ]] 00:21:07.420 13:03:23 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:07.420 13:03:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:07.420 13:03:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.421 13:03:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.421 13:03:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:07.678 13:03:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.trSphZJS27 == \/\t\m\p\/\t\m\p\.\t\r\S\p\h\Z\J\S\2\7 ]] 00:21:07.678 13:03:23 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:07.678 13:03:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:07.678 13:03:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:07.678 13:03:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.678 13:03:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.678 13:03:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.936 13:03:23 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:07.936 13:03:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:07.936 13:03:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:07.936 13:03:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:07.936 13:03:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:07.936 13:03:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.936 13:03:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.194 13:03:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:08.194 13:03:24 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.194 13:03:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.453 [2024-07-15 13:03:24.359368] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.453 nvme0n1 00:21:08.453 13:03:24 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:08.453 13:03:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:08.453 13:03:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.453 13:03:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.453 13:03:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:08.453 13:03:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.710 13:03:24 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:08.710 13:03:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:08.710 13:03:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.710 13:03:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:08.710 13:03:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:08.710 13:03:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.710 13:03:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:08.968 13:03:24 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:08.969 13:03:24 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:08.969 Running I/O for 1 seconds... 00:21:10.350 00:21:10.350 Latency(us) 00:21:10.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.350 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:10.350 nvme0n1 : 1.01 12857.31 50.22 0.00 0.00 9921.33 5093.93 16562.73 00:21:10.350 =================================================================================================================== 00:21:10.350 Total : 12857.31 50.22 0.00 0.00 9921.33 5093.93 16562.73 00:21:10.350 0 00:21:10.350 13:03:26 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:10.350 13:03:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:10.350 13:03:26 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:10.350 13:03:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:10.350 13:03:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:10.350 13:03:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:10.350 13:03:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:10.350 13:03:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:10.608 13:03:26 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:10.608 13:03:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:10.608 13:03:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:10.608 13:03:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:10.608 13:03:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:10.608 13:03:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:10.608 13:03:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:10.867 13:03:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:10.867 13:03:26 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:10.867 13:03:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:10.867 [2024-07-15 13:03:26.884819] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:10.867 [2024-07-15 13:03:26.885310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78590 (107): Transport endpoint is not connected 00:21:10.867 [2024-07-15 13:03:26.886299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78590 (9): Bad file descriptor 00:21:10.867 [2024-07-15 13:03:26.887297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.867 [2024-07-15 13:03:26.887323] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:10.867 [2024-07-15 13:03:26.887350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.867 request: 00:21:10.867 { 00:21:10.867 "name": "nvme0", 00:21:10.867 "trtype": "tcp", 00:21:10.867 "traddr": "127.0.0.1", 00:21:10.867 "adrfam": "ipv4", 00:21:10.867 "trsvcid": "4420", 00:21:10.867 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.867 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.867 "prchk_reftag": false, 00:21:10.867 "prchk_guard": false, 00:21:10.867 "hdgst": false, 00:21:10.867 "ddgst": false, 00:21:10.867 "psk": "key1", 00:21:10.867 "method": "bdev_nvme_attach_controller", 00:21:10.867 "req_id": 1 00:21:10.867 } 00:21:10.867 Got JSON-RPC error response 00:21:10.867 response: 00:21:10.867 { 00:21:10.867 "code": -5, 00:21:10.867 "message": "Input/output error" 00:21:10.867 } 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:10.867 13:03:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.867 13:03:26 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:10.867 13:03:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:10.867 13:03:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:10.867 13:03:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:10.867 13:03:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:10.867 13:03:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.124 13:03:27 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:11.124 13:03:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:11.124 13:03:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:11.124 13:03:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.124 13:03:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
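
The trace above is the core positive path: both key files are registered with the bdevperf app over /var/tmp/bperf.sock, a controller is attached with --psk key0, which bumps key0's refcnt from 1 to 2, one second of random I/O is driven through bdevperf.py perform_tests, and the controller is detached again. Condensed from the commands shown in the trace (key0path and key1path stand for the temp key files prepared earlier), the flow is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf=/var/tmp/bperf.sock

"$rpc" -s "$bperf" keyring_file_add_key key0 "$key0path"
"$rpc" -s "$bperf" keyring_file_add_key key1 "$key1path"

# A controller that names key0 takes an extra reference on it (refcnt 1 -> 2).
"$rpc" -s "$bperf" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
"$rpc" -s "$bperf" keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'   # expect 2

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf" perform_tests
"$rpc" -s "$bperf" bdev_nvme_detach_controller nvme0
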
00:21:11.124 13:03:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.124 13:03:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:11.382 13:03:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:11.382 13:03:27 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:11.383 13:03:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:11.640 13:03:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:11.640 13:03:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:11.898 13:03:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:11.898 13:03:27 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:11.898 13:03:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.155 13:03:28 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:12.155 13:03:28 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.lHJg70vIN5 00:21:12.155 13:03:28 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHJg70vIN5 00:21:12.155 13:03:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:12.155 13:03:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHJg70vIN5 00:21:12.155 13:03:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:12.155 13:03:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:12.155 13:03:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:12.155 13:03:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:12.155 13:03:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHJg70vIN5 00:21:12.155 13:03:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lHJg70vIN5 00:21:12.413 [2024-07-15 13:03:28.268746] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lHJg70vIN5': 0100660 00:21:12.413 [2024-07-15 13:03:28.268788] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:12.413 request: 00:21:12.413 { 00:21:12.413 "name": "key0", 00:21:12.413 "path": "/tmp/tmp.lHJg70vIN5", 00:21:12.413 "method": "keyring_file_add_key", 00:21:12.413 "req_id": 1 00:21:12.413 } 00:21:12.413 Got JSON-RPC error response 00:21:12.413 response: 00:21:12.413 { 00:21:12.413 "code": -1, 00:21:12.413 "message": "Operation not permitted" 00:21:12.413 } 00:21:12.413 13:03:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:12.413 13:03:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:12.413 13:03:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:12.413 13:03:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:12.413 13:03:28 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.lHJg70vIN5 00:21:12.413 13:03:28 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHJg70vIN5 00:21:12.413 13:03:28 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lHJg70vIN5 00:21:12.670 13:03:28 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.lHJg70vIN5 00:21:12.670 13:03:28 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:12.670 13:03:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:12.670 13:03:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:12.670 13:03:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:12.670 13:03:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.670 13:03:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.949 13:03:28 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:12.949 13:03:28 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:12.949 13:03:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:12.949 [2024-07-15 13:03:28.980984] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lHJg70vIN5': No such file or directory 00:21:12.949 [2024-07-15 13:03:28.981029] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:12.949 [2024-07-15 13:03:28.981055] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:12.949 [2024-07-15 13:03:28.981063] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:12.949 [2024-07-15 13:03:28.981072] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:12.949 request: 00:21:12.949 { 00:21:12.949 "name": "nvme0", 00:21:12.949 "trtype": "tcp", 00:21:12.949 "traddr": "127.0.0.1", 00:21:12.949 "adrfam": "ipv4", 00:21:12.949 "trsvcid": "4420", 00:21:12.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:12.949 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:12.949 "prchk_reftag": false, 00:21:12.949 "prchk_guard": false, 00:21:12.949 "hdgst": false, 00:21:12.949 "ddgst": false, 00:21:12.949 "psk": "key0", 00:21:12.949 "method": "bdev_nvme_attach_controller", 00:21:12.949 "req_id": 1 00:21:12.949 } 00:21:12.949 
Got JSON-RPC error response 00:21:12.949 response: 00:21:12.949 { 00:21:12.949 "code": -19, 00:21:12.949 "message": "No such device" 00:21:12.949 } 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:12.949 13:03:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:12.950 13:03:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:12.950 13:03:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:12.950 13:03:28 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:12.950 13:03:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:13.211 13:03:29 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FPwa60hEvN 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:13.211 13:03:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:13.211 13:03:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:13.211 13:03:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:13.211 13:03:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:13.211 13:03:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:13.211 13:03:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FPwa60hEvN 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FPwa60hEvN 00:21:13.211 13:03:29 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.FPwa60hEvN 00:21:13.211 13:03:29 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FPwa60hEvN 00:21:13.211 13:03:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FPwa60hEvN 00:21:13.469 13:03:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.469 13:03:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.727 nvme0n1 00:21:13.985 13:03:29 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:13.985 13:03:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:13.985 13:03:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:13.985 13:03:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.985 13:03:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:13.985 13:03:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:14.242 13:03:30 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:14.243 13:03:30 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:14.243 13:03:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:14.500 13:03:30 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:14.500 13:03:30 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:14.500 13:03:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:14.500 13:03:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:14.500 13:03:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:14.500 13:03:30 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:14.500 13:03:30 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:14.500 13:03:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:14.500 13:03:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:14.500 13:03:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:14.500 13:03:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:14.500 13:03:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:14.757 13:03:30 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:14.757 13:03:30 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:14.757 13:03:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:15.013 13:03:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:15.013 13:03:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.013 13:03:31 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:15.270 13:03:31 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:15.270 13:03:31 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FPwa60hEvN 00:21:15.270 13:03:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FPwa60hEvN 00:21:15.528 13:03:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.trSphZJS27 00:21:15.528 13:03:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.trSphZJS27 00:21:15.786 13:03:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:15.786 13:03:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:16.045 nvme0n1 00:21:16.045 13:03:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:16.045 13:03:32 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:16.304 13:03:32 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:16.304 "subsystems": [ 00:21:16.304 { 00:21:16.304 "subsystem": "keyring", 00:21:16.304 "config": [ 00:21:16.304 { 00:21:16.304 "method": "keyring_file_add_key", 00:21:16.304 "params": { 00:21:16.304 "name": "key0", 00:21:16.304 "path": "/tmp/tmp.FPwa60hEvN" 00:21:16.304 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "keyring_file_add_key", 00:21:16.305 "params": { 00:21:16.305 "name": "key1", 00:21:16.305 "path": "/tmp/tmp.trSphZJS27" 00:21:16.305 } 00:21:16.305 } 00:21:16.305 ] 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "subsystem": "iobuf", 00:21:16.305 "config": [ 00:21:16.305 { 00:21:16.305 "method": "iobuf_set_options", 00:21:16.305 "params": { 00:21:16.305 "small_pool_count": 8192, 00:21:16.305 "large_pool_count": 1024, 00:21:16.305 "small_bufsize": 8192, 00:21:16.305 "large_bufsize": 135168 00:21:16.305 } 00:21:16.305 } 00:21:16.305 ] 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "subsystem": "sock", 00:21:16.305 "config": [ 00:21:16.305 { 00:21:16.305 "method": "sock_set_default_impl", 00:21:16.305 "params": { 00:21:16.305 "impl_name": "uring" 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "sock_impl_set_options", 00:21:16.305 "params": { 00:21:16.305 "impl_name": "ssl", 00:21:16.305 "recv_buf_size": 4096, 00:21:16.305 "send_buf_size": 4096, 00:21:16.305 "enable_recv_pipe": true, 00:21:16.305 "enable_quickack": false, 00:21:16.305 "enable_placement_id": 0, 00:21:16.305 "enable_zerocopy_send_server": true, 00:21:16.305 "enable_zerocopy_send_client": false, 00:21:16.305 "zerocopy_threshold": 0, 00:21:16.305 "tls_version": 0, 00:21:16.305 "enable_ktls": false 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "sock_impl_set_options", 00:21:16.305 "params": { 00:21:16.305 "impl_name": "posix", 00:21:16.305 "recv_buf_size": 2097152, 00:21:16.305 "send_buf_size": 2097152, 00:21:16.305 "enable_recv_pipe": true, 00:21:16.305 "enable_quickack": false, 00:21:16.305 "enable_placement_id": 0, 00:21:16.305 "enable_zerocopy_send_server": true, 00:21:16.305 "enable_zerocopy_send_client": false, 00:21:16.305 "zerocopy_threshold": 0, 00:21:16.305 "tls_version": 0, 00:21:16.305 "enable_ktls": false 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "sock_impl_set_options", 00:21:16.305 "params": { 00:21:16.305 "impl_name": "uring", 00:21:16.305 "recv_buf_size": 2097152, 00:21:16.305 "send_buf_size": 2097152, 00:21:16.305 "enable_recv_pipe": true, 00:21:16.305 "enable_quickack": false, 00:21:16.305 "enable_placement_id": 0, 00:21:16.305 "enable_zerocopy_send_server": false, 00:21:16.305 "enable_zerocopy_send_client": false, 00:21:16.305 "zerocopy_threshold": 0, 00:21:16.305 "tls_version": 0, 00:21:16.305 "enable_ktls": false 00:21:16.305 } 00:21:16.305 } 00:21:16.305 ] 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "subsystem": "vmd", 00:21:16.305 "config": [] 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "subsystem": "accel", 00:21:16.305 "config": [ 00:21:16.305 { 00:21:16.305 "method": "accel_set_options", 00:21:16.305 "params": { 00:21:16.305 "small_cache_size": 128, 00:21:16.305 "large_cache_size": 16, 00:21:16.305 "task_count": 2048, 00:21:16.305 "sequence_count": 2048, 00:21:16.305 "buf_count": 2048 00:21:16.305 } 00:21:16.305 } 00:21:16.305 ] 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "subsystem": "bdev", 00:21:16.305 "config": [ 00:21:16.305 { 
00:21:16.305 "method": "bdev_set_options", 00:21:16.305 "params": { 00:21:16.305 "bdev_io_pool_size": 65535, 00:21:16.305 "bdev_io_cache_size": 256, 00:21:16.305 "bdev_auto_examine": true, 00:21:16.305 "iobuf_small_cache_size": 128, 00:21:16.305 "iobuf_large_cache_size": 16 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "bdev_raid_set_options", 00:21:16.305 "params": { 00:21:16.305 "process_window_size_kb": 1024 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "bdev_iscsi_set_options", 00:21:16.305 "params": { 00:21:16.305 "timeout_sec": 30 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "bdev_nvme_set_options", 00:21:16.305 "params": { 00:21:16.305 "action_on_timeout": "none", 00:21:16.305 "timeout_us": 0, 00:21:16.305 "timeout_admin_us": 0, 00:21:16.305 "keep_alive_timeout_ms": 10000, 00:21:16.305 "arbitration_burst": 0, 00:21:16.305 "low_priority_weight": 0, 00:21:16.305 "medium_priority_weight": 0, 00:21:16.305 "high_priority_weight": 0, 00:21:16.305 "nvme_adminq_poll_period_us": 10000, 00:21:16.305 "nvme_ioq_poll_period_us": 0, 00:21:16.305 "io_queue_requests": 512, 00:21:16.305 "delay_cmd_submit": true, 00:21:16.305 "transport_retry_count": 4, 00:21:16.305 "bdev_retry_count": 3, 00:21:16.305 "transport_ack_timeout": 0, 00:21:16.305 "ctrlr_loss_timeout_sec": 0, 00:21:16.305 "reconnect_delay_sec": 0, 00:21:16.305 "fast_io_fail_timeout_sec": 0, 00:21:16.305 "disable_auto_failback": false, 00:21:16.305 "generate_uuids": false, 00:21:16.305 "transport_tos": 0, 00:21:16.305 "nvme_error_stat": false, 00:21:16.305 "rdma_srq_size": 0, 00:21:16.305 "io_path_stat": false, 00:21:16.305 "allow_accel_sequence": false, 00:21:16.305 "rdma_max_cq_size": 0, 00:21:16.305 "rdma_cm_event_timeout_ms": 0, 00:21:16.305 "dhchap_digests": [ 00:21:16.305 "sha256", 00:21:16.305 "sha384", 00:21:16.305 "sha512" 00:21:16.305 ], 00:21:16.305 "dhchap_dhgroups": [ 00:21:16.305 "null", 00:21:16.305 "ffdhe2048", 00:21:16.305 "ffdhe3072", 00:21:16.305 "ffdhe4096", 00:21:16.305 "ffdhe6144", 00:21:16.305 "ffdhe8192" 00:21:16.305 ] 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "bdev_nvme_attach_controller", 00:21:16.305 "params": { 00:21:16.305 "name": "nvme0", 00:21:16.305 "trtype": "TCP", 00:21:16.305 "adrfam": "IPv4", 00:21:16.305 "traddr": "127.0.0.1", 00:21:16.305 "trsvcid": "4420", 00:21:16.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:16.305 "prchk_reftag": false, 00:21:16.305 "prchk_guard": false, 00:21:16.305 "ctrlr_loss_timeout_sec": 0, 00:21:16.305 "reconnect_delay_sec": 0, 00:21:16.305 "fast_io_fail_timeout_sec": 0, 00:21:16.305 "psk": "key0", 00:21:16.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:16.305 "hdgst": false, 00:21:16.305 "ddgst": false 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "bdev_nvme_set_hotplug", 00:21:16.305 "params": { 00:21:16.305 "period_us": 100000, 00:21:16.305 "enable": false 00:21:16.305 } 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "method": "bdev_wait_for_examine" 00:21:16.305 } 00:21:16.305 ] 00:21:16.305 }, 00:21:16.305 { 00:21:16.305 "subsystem": "nbd", 00:21:16.305 "config": [] 00:21:16.305 } 00:21:16.305 ] 00:21:16.305 }' 00:21:16.305 13:03:32 keyring_file -- keyring/file.sh@114 -- # killprocess 85269 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85269 ']' 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85269 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85269 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.305 killing process with pid 85269 00:21:16.305 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.305 00:21:16.305 Latency(us) 00:21:16.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.305 =================================================================================================================== 00:21:16.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85269' 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@967 -- # kill 85269 00:21:16.305 13:03:32 keyring_file -- common/autotest_common.sh@972 -- # wait 85269 00:21:16.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:16.565 13:03:32 keyring_file -- keyring/file.sh@117 -- # bperfpid=85507 00:21:16.565 13:03:32 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85507 /var/tmp/bperf.sock 00:21:16.565 13:03:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85507 ']' 00:21:16.565 13:03:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:16.565 13:03:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.565 13:03:32 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:16.565 13:03:32 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:16.565 "subsystems": [ 00:21:16.565 { 00:21:16.565 "subsystem": "keyring", 00:21:16.565 "config": [ 00:21:16.565 { 00:21:16.565 "method": "keyring_file_add_key", 00:21:16.565 "params": { 00:21:16.565 "name": "key0", 00:21:16.565 "path": "/tmp/tmp.FPwa60hEvN" 00:21:16.566 } 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "method": "keyring_file_add_key", 00:21:16.566 "params": { 00:21:16.566 "name": "key1", 00:21:16.566 "path": "/tmp/tmp.trSphZJS27" 00:21:16.566 } 00:21:16.566 } 00:21:16.566 ] 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "subsystem": "iobuf", 00:21:16.566 "config": [ 00:21:16.566 { 00:21:16.566 "method": "iobuf_set_options", 00:21:16.566 "params": { 00:21:16.566 "small_pool_count": 8192, 00:21:16.566 "large_pool_count": 1024, 00:21:16.566 "small_bufsize": 8192, 00:21:16.566 "large_bufsize": 135168 00:21:16.566 } 00:21:16.566 } 00:21:16.566 ] 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "subsystem": "sock", 00:21:16.566 "config": [ 00:21:16.566 { 00:21:16.566 "method": "sock_set_default_impl", 00:21:16.566 "params": { 00:21:16.566 "impl_name": "uring" 00:21:16.566 } 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "method": "sock_impl_set_options", 00:21:16.566 "params": { 00:21:16.566 "impl_name": "ssl", 00:21:16.566 "recv_buf_size": 4096, 00:21:16.566 "send_buf_size": 4096, 00:21:16.566 "enable_recv_pipe": true, 00:21:16.566 "enable_quickack": false, 00:21:16.566 "enable_placement_id": 0, 00:21:16.566 "enable_zerocopy_send_server": true, 00:21:16.566 "enable_zerocopy_send_client": false, 00:21:16.566 "zerocopy_threshold": 0, 00:21:16.566 "tls_version": 0, 00:21:16.566 
"enable_ktls": false 00:21:16.566 } 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "method": "sock_impl_set_options", 00:21:16.566 "params": { 00:21:16.566 "impl_name": "posix", 00:21:16.566 "recv_buf_size": 2097152, 00:21:16.566 "send_buf_size": 2097152, 00:21:16.566 "enable_recv_pipe": true, 00:21:16.566 "enable_quickack": false, 00:21:16.566 "enable_placement_id": 0, 00:21:16.566 "enable_zerocopy_send_server": true, 00:21:16.566 "enable_zerocopy_send_client": false, 00:21:16.566 "zerocopy_threshold": 0, 00:21:16.566 "tls_version": 0, 00:21:16.566 "enable_ktls": false 00:21:16.566 } 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "method": "sock_impl_set_options", 00:21:16.566 "params": { 00:21:16.566 "impl_name": "uring", 00:21:16.566 "recv_buf_size": 2097152, 00:21:16.566 "send_buf_size": 2097152, 00:21:16.566 "enable_recv_pipe": true, 00:21:16.566 "enable_quickack": false, 00:21:16.566 "enable_placement_id": 0, 00:21:16.566 "enable_zerocopy_send_server": false, 00:21:16.566 "enable_zerocopy_send_client": false, 00:21:16.566 "zerocopy_threshold": 0, 00:21:16.566 "tls_version": 0, 00:21:16.566 "enable_ktls": false 00:21:16.566 } 00:21:16.566 } 00:21:16.566 ] 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "subsystem": "vmd", 00:21:16.566 "config": [] 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "subsystem": "accel", 00:21:16.566 "config": [ 00:21:16.566 { 00:21:16.566 "method": "accel_set_options", 00:21:16.566 "params": { 00:21:16.566 "small_cache_size": 128, 00:21:16.566 "large_cache_size": 16, 00:21:16.566 "task_count": 2048, 00:21:16.566 "sequence_count": 2048, 00:21:16.566 "buf_count": 2048 00:21:16.566 } 00:21:16.566 } 00:21:16.566 ] 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "subsystem": "bdev", 00:21:16.566 "config": [ 00:21:16.566 { 00:21:16.566 "method": "bdev_set_options", 00:21:16.566 "params": { 00:21:16.566 "bdev_io_pool_size": 65535, 00:21:16.566 "bdev_io_cache_size": 256, 00:21:16.566 "bdev_auto_examine": true, 00:21:16.566 "iobuf_small_cache_size": 128, 00:21:16.566 "iobuf_large_cache_size": 16 00:21:16.566 } 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "method": "bdev_raid_set_options", 00:21:16.566 "params": { 00:21:16.566 "process_window_size_kb": 1024 00:21:16.566 } 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "method": "bdev_iscsi_set_options", 00:21:16.566 "params": { 00:21:16.566 "timeout_sec": 30 00:21:16.566 } 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "method": "bdev_nvme_set_options", 00:21:16.566 "params": { 00:21:16.566 "action_on_timeout": "none", 00:21:16.566 "timeout_us": 0, 00:21:16.566 "timeout_admin_us": 0, 00:21:16.566 "keep_alive_timeout_ms": 10000, 00:21:16.566 "arbitration_burst": 0, 00:21:16.566 "low_priority_weight": 0, 00:21:16.566 "medium_priority_weight": 0, 00:21:16.566 "high_priority_weight": 0, 00:21:16.566 "nvme_adminq_poll_period_us": 10000, 00:21:16.566 "nvme_ioq_poll_period_us": 0, 00:21:16.566 "io_queue_requests": 512, 00:21:16.566 "delay_cmd_submit": true, 00:21:16.566 "transport_retry_count": 4, 00:21:16.566 "bdev_retry_count": 3, 00:21:16.566 "transport_ack_timeout": 0, 00:21:16.566 "ctrlr_loss_timeout_sec": 0, 00:21:16.566 "reconnect_delay_sec": 0, 00:21:16.566 "fast_io_fail_timeout_sec": 0, 00:21:16.566 "disable_auto_failback": false, 00:21:16.566 "generate_uuids": false, 00:21:16.566 "transport_tos": 0, 00:21:16.566 "nvme_error_stat": false, 00:21:16.566 "rdma_srq_size": 0, 00:21:16.566 "io_path_stat": false, 00:21:16.566 "allow_accel_sequence": false, 00:21:16.566 "rdma_max_cq_size": 0, 00:21:16.566 "rdma_cm_event_timeout_ms": 0, 
00:21:16.566 "dhchap_digests": [ 00:21:16.566 "sha256", 00:21:16.566 "sha384", 00:21:16.566 "sha512" 00:21:16.566 ], 00:21:16.566 "dhchap_dhgroups": [ 00:21:16.566 "null", 00:21:16.566 "ffdhe2048", 00:21:16.566 "ffdhe3072", 00:21:16.566 "ffdhe4096", 00:21:16.566 "ffdhe6144", 00:21:16.566 "ffdhe8192" 00:21:16.566 ] 00:21:16.566 } 00:21:16.566 }, 00:21:16.566 { 00:21:16.566 "method": "bdev_nvme_attach_controller", 00:21:16.566 "params": { 00:21:16.566 "name": "nvme0", 00:21:16.566 "trtype": "TCP", 00:21:16.566 "adrfam": "IPv4", 00:21:16.566 "traddr": "127.0.0.1", 00:21:16.566 "trsvcid": "4420", 00:21:16.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:16.566 "prchk_reftag": false, 00:21:16.566 "prchk_guard": false, 00:21:16.566 "ctrlr_loss_timeout_sec": 0, 00:21:16.566 "reconnect_delay_sec": 0, 00:21:16.566 "fast_io_fail_timeout_sec": 0, 00:21:16.566 "psk": "key0", 00:21:16.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:16.566 "hdgst": false, 00:21:16.566 "ddgst": false 00:21:16.566 } 00:21:16.567 }, 00:21:16.567 { 00:21:16.567 "method": "bdev_nvme_set_hotplug", 00:21:16.567 "params": { 00:21:16.567 "period_us": 100000, 00:21:16.567 "enable": false 00:21:16.567 } 00:21:16.567 }, 00:21:16.567 { 00:21:16.567 "method": "bdev_wait_for_examine" 00:21:16.567 } 00:21:16.567 ] 00:21:16.567 }, 00:21:16.567 { 00:21:16.567 "subsystem": "nbd", 00:21:16.567 "config": [] 00:21:16.567 } 00:21:16.567 ] 00:21:16.567 }' 00:21:16.567 13:03:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:16.567 13:03:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.567 13:03:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:16.567 [2024-07-15 13:03:32.616200] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:16.567 [2024-07-15 13:03:32.616286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85507 ] 00:21:16.825 [2024-07-15 13:03:32.749399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.825 [2024-07-15 13:03:32.856924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.083 [2024-07-15 13:03:32.991796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:17.083 [2024-07-15 13:03:33.042303] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.668 13:03:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.668 13:03:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:17.668 13:03:33 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:17.668 13:03:33 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:17.668 13:03:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:17.927 13:03:33 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:17.927 13:03:33 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:17.927 13:03:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:17.927 13:03:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:17.927 13:03:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:17.927 13:03:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:17.927 13:03:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.186 13:03:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:18.186 13:03:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:18.186 13:03:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:18.186 13:03:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:18.186 13:03:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:18.186 13:03:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.186 13:03:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:18.444 13:03:34 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:18.444 13:03:34 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:18.444 13:03:34 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:18.444 13:03:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:18.444 13:03:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:18.444 13:03:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:18.444 13:03:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FPwa60hEvN /tmp/tmp.trSphZJS27 00:21:18.444 13:03:34 keyring_file -- keyring/file.sh@20 -- # killprocess 85507 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85507 ']' 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85507 00:21:18.444 13:03:34 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85507 00:21:18.444 killing process with pid 85507 00:21:18.444 Received shutdown signal, test time was about 1.000000 seconds 00:21:18.444 00:21:18.444 Latency(us) 00:21:18.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.444 =================================================================================================================== 00:21:18.444 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85507' 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@967 -- # kill 85507 00:21:18.444 13:03:34 keyring_file -- common/autotest_common.sh@972 -- # wait 85507 00:21:18.702 13:03:34 keyring_file -- keyring/file.sh@21 -- # killprocess 85252 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85252 ']' 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85252 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85252 00:21:18.702 killing process with pid 85252 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85252' 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@967 -- # kill 85252 00:21:18.702 [2024-07-15 13:03:34.746556] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:18.702 13:03:34 keyring_file -- common/autotest_common.sh@972 -- # wait 85252 00:21:19.283 00:21:19.283 real 0m14.641s 00:21:19.283 user 0m35.971s 00:21:19.283 sys 0m2.932s 00:21:19.283 13:03:35 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:19.283 ************************************ 00:21:19.283 END TEST keyring_file 00:21:19.283 13:03:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:19.283 ************************************ 00:21:19.283 13:03:35 -- common/autotest_common.sh@1142 -- # return 0 00:21:19.283 13:03:35 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:21:19.283 13:03:35 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:19.283 13:03:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:19.283 13:03:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.283 13:03:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.283 ************************************ 00:21:19.283 START TEST keyring_linux 00:21:19.283 ************************************ 00:21:19.283 13:03:35 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:19.283 * Looking for test 
storage... 00:21:19.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:19.283 13:03:35 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:19.283 13:03:35 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=d239ea4f-47fe-42e0-b535-ac0b7a58df88 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.283 13:03:35 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.283 13:03:35 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.283 13:03:35 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.283 13:03:35 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.283 13:03:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.284 13:03:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.284 13:03:35 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.284 13:03:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:19.284 13:03:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:19.284 13:03:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:19.284 13:03:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:19.284 13:03:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:19.284 13:03:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:19.284 13:03:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:19.284 13:03:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:19.284 13:03:35 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:19.284 /tmp/:spdk-test:key0 00:21:19.284 13:03:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:19.284 13:03:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:19.284 13:03:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:19.544 13:03:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:19.544 13:03:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:19.544 /tmp/:spdk-test:key1 00:21:19.544 13:03:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85621 00:21:19.544 13:03:35 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.544 13:03:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85621 00:21:19.544 13:03:35 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85621 ']' 00:21:19.544 13:03:35 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.544 13:03:35 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.544 13:03:35 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.544 13:03:35 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.544 13:03:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:19.544 [2024-07-15 13:03:35.417343] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:19.544 [2024-07-15 13:03:35.417680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85621 ] 00:21:19.544 [2024-07-15 13:03:35.556150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.802 [2024-07-15 13:03:35.652128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.802 [2024-07-15 13:03:35.706275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:20.369 13:03:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:20.369 [2024-07-15 13:03:36.344561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.369 null0 00:21:20.369 [2024-07-15 13:03:36.376521] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.369 [2024-07-15 13:03:36.376728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.369 13:03:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:20.369 963784258 00:21:20.369 13:03:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:20.369 403071683 00:21:20.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:20.369 13:03:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85639 00:21:20.369 13:03:36 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:20.369 13:03:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85639 /var/tmp/bperf.sock 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85639 ']' 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.369 13:03:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:20.630 [2024-07-15 13:03:36.458379] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:20.630 [2024-07-15 13:03:36.458683] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85639 ] 00:21:20.630 [2024-07-15 13:03:36.600138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.889 [2024-07-15 13:03:36.701948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.457 13:03:37 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.457 13:03:37 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:21.457 13:03:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:21.457 13:03:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:21.716 13:03:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:21.716 13:03:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:21.975 [2024-07-15 13:03:37.876572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:21.975 13:03:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:21.975 13:03:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:22.233 [2024-07-15 13:03:38.146932] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.233 nvme0n1 00:21:22.233 13:03:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:22.233 13:03:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:22.233 13:03:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:22.233 13:03:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:22.233 13:03:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:22.233 13:03:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:22.492 13:03:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:22.492 13:03:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:22.492 13:03:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:22.492 13:03:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:22.492 13:03:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:22.492 13:03:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:22.492 13:03:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:22.751 13:03:38 keyring_linux -- keyring/linux.sh@25 -- # sn=963784258 00:21:22.751 13:03:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:22.751 13:03:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:22.751 
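The trace above is the core of the keyring_linux positive path: the two interchange PSKs written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 are loaded into the kernel session keyring, the bdevperf instance listening on /var/tmp/bperf.sock is switched to the Linux keyring backend and initialized, and the NVMe/TCP controller is attached by key name rather than by a PSK file path. A condensed sketch of that sequence, using only commands that appear in the trace (the serial numbers in the comments are simply what this run got back from keyctl):

    # register the PSKs prepared in /tmp with the kernel session keyring
    keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s   # -> 963784258
    keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s   # -> 403071683

    # switch bdevperf to the Linux keyring backend and finish its deferred init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

    # attach the NVMe/TCP controller, naming the key instead of passing a key file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    # confirm the key is visible from both sides
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
    keyctl search @s user :spdk-test:key0

check_keys then cross-checks the same key through both interfaces, keyring_get_keys on the RPC side and keyctl search plus keyctl print on the kernel side, which is the serial-number and payload comparison that follows.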
13:03:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 963784258 == \9\6\3\7\8\4\2\5\8 ]] 00:21:22.751 13:03:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 963784258 00:21:22.751 13:03:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:22.751 13:03:38 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:23.009 Running I/O for 1 seconds... 00:21:23.946 00:21:23.947 Latency(us) 00:21:23.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.947 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:23.947 nvme0n1 : 1.01 13950.46 54.49 0.00 0.00 9131.46 6434.44 15728.64 00:21:23.947 =================================================================================================================== 00:21:23.947 Total : 13950.46 54.49 0.00 0.00 9131.46 6434.44 15728.64 00:21:23.947 0 00:21:23.947 13:03:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:23.947 13:03:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:24.206 13:03:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:24.206 13:03:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:24.206 13:03:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:24.206 13:03:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:24.206 13:03:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.206 13:03:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:24.465 13:03:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:24.465 13:03:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:24.465 13:03:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:24.465 13:03:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:24.465 13:03:40 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:21:24.465 13:03:40 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:24.465 13:03:40 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:24.465 13:03:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:24.465 13:03:40 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:24.465 13:03:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:24.465 13:03:40 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:24.465 13:03:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:24.725 [2024-07-15 13:03:40.580494] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:24.725 [2024-07-15 13:03:40.580772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cd460 (107): Transport endpoint is not connected 00:21:24.725 [2024-07-15 13:03:40.581761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cd460 (9): Bad file descriptor 00:21:24.725 [2024-07-15 13:03:40.582757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:24.725 [2024-07-15 13:03:40.582780] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:24.725 [2024-07-15 13:03:40.582791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:24.725 request: 00:21:24.725 { 00:21:24.725 "name": "nvme0", 00:21:24.725 "trtype": "tcp", 00:21:24.725 "traddr": "127.0.0.1", 00:21:24.725 "adrfam": "ipv4", 00:21:24.725 "trsvcid": "4420", 00:21:24.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:24.725 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:24.725 "prchk_reftag": false, 00:21:24.725 "prchk_guard": false, 00:21:24.725 "hdgst": false, 00:21:24.725 "ddgst": false, 00:21:24.725 "psk": ":spdk-test:key1", 00:21:24.725 "method": "bdev_nvme_attach_controller", 00:21:24.725 "req_id": 1 00:21:24.725 } 00:21:24.725 Got JSON-RPC error response 00:21:24.725 response: 00:21:24.725 { 00:21:24.725 "code": -5, 00:21:24.725 "message": "Input/output error" 00:21:24.725 } 00:21:24.725 13:03:40 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:21:24.725 13:03:40 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:24.725 13:03:40 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:24.725 13:03:40 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:24.725 13:03:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:24.725 13:03:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:24.725 13:03:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:24.725 13:03:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:24.725 13:03:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:24.725 13:03:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:24.725 13:03:40 keyring_linux -- keyring/linux.sh@33 -- # sn=963784258 00:21:24.726 13:03:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 963784258 00:21:24.726 1 links removed 00:21:24.726 13:03:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:24.726 13:03:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:24.726 13:03:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:24.726 13:03:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:24.726 13:03:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:24.726 13:03:40 keyring_linux -- keyring/linux.sh@33 -- # sn=403071683 00:21:24.726 13:03:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 403071683 00:21:24.726 1 links removed 00:21:24.726 13:03:40 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 85639 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85639 ']' 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85639 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85639 00:21:24.726 killing process with pid 85639 00:21:24.726 Received shutdown signal, test time was about 1.000000 seconds 00:21:24.726 00:21:24.726 Latency(us) 00:21:24.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.726 =================================================================================================================== 00:21:24.726 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85639' 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 85639 00:21:24.726 13:03:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 85639 00:21:25.017 13:03:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85621 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85621 ']' 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85621 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85621 00:21:25.017 killing process with pid 85621 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85621' 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 85621 00:21:25.017 13:03:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 85621 00:21:25.275 00:21:25.275 real 0m6.109s 00:21:25.275 user 0m11.717s 00:21:25.275 sys 0m1.563s 00:21:25.275 13:03:41 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:25.275 13:03:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:25.275 ************************************ 00:21:25.275 END TEST keyring_linux 00:21:25.275 ************************************ 00:21:25.275 13:03:41 -- common/autotest_common.sh@1142 -- # return 0 00:21:25.275 13:03:41 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
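Teardown for both keyring suites goes through the killprocess helper from common/autotest_common.sh, and its behaviour can be read off the trace: after the expected-failure attach with :spdk-test:key1 and the keyctl unlink cleanup, bdevperf (pid 85639, comm reactor_1) and the target (pid 85621, comm reactor_0) are each checked for liveness, identified by command name, then killed and waited on. A rough sketch of that flow as observed here; the real helper also special-cases sudo-wrapped processes, which does not trigger in this run:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                          # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 for the target, reactor_1 for bdevperf
        fi
        if [ "$process_name" != sudo ]; then                # sudo-wrapped processes take a different path
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }

The same pattern appears a few seconds earlier in the log for the keyring_file pids 85507 and 85252.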
00:21:25.275 13:03:41 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:21:25.275 13:03:41 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:21:25.275 13:03:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:25.275 13:03:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:25.275 13:03:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:21:25.275 13:03:41 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:21:25.275 13:03:41 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:21:25.275 13:03:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.275 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:21:25.275 13:03:41 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:21:25.275 13:03:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:25.275 13:03:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:25.275 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.175 INFO: APP EXITING 00:21:27.175 INFO: killing all VMs 00:21:27.175 INFO: killing vhost app 00:21:27.175 INFO: EXIT DONE 00:21:27.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.692 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:27.692 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:28.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:28.258 Cleaning 00:21:28.258 Removing: /var/run/dpdk/spdk0/config 00:21:28.258 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:28.258 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:28.258 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:28.258 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:28.258 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:28.258 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:28.258 Removing: /var/run/dpdk/spdk1/config 00:21:28.258 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:28.258 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:28.258 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:28.258 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:28.258 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:28.258 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:28.258 Removing: /var/run/dpdk/spdk2/config 00:21:28.258 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:28.258 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:28.258 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:28.258 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:28.258 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:28.258 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:28.258 Removing: /var/run/dpdk/spdk3/config 00:21:28.258 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:28.258 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:28.258 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:28.258 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:28.258 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:28.258 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:28.258 Removing: /var/run/dpdk/spdk4/config 00:21:28.258 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:28.258 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:28.258 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:28.258 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:28.258 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:28.258 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:28.258 Removing: /dev/shm/nvmf_trace.0 00:21:28.517 Removing: /dev/shm/spdk_tgt_trace.pid58677 00:21:28.517 Removing: /var/run/dpdk/spdk0 00:21:28.517 Removing: /var/run/dpdk/spdk1 00:21:28.517 Removing: /var/run/dpdk/spdk2 00:21:28.517 Removing: /var/run/dpdk/spdk3 00:21:28.517 Removing: /var/run/dpdk/spdk4 00:21:28.517 Removing: /var/run/dpdk/spdk_pid58532 00:21:28.517 Removing: /var/run/dpdk/spdk_pid58677 00:21:28.517 Removing: /var/run/dpdk/spdk_pid58876 00:21:28.517 Removing: /var/run/dpdk/spdk_pid58957 00:21:28.517 Removing: /var/run/dpdk/spdk_pid58990 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59094 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59112 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59230 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59426 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59572 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59637 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59707 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59798 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59875 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59908 00:21:28.517 Removing: /var/run/dpdk/spdk_pid59944 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60005 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60105 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60543 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60595 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60633 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60649 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60716 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60732 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60804 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60821 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60866 00:21:28.517 Removing: /var/run/dpdk/spdk_pid60884 00:21:28.518 Removing: /var/run/dpdk/spdk_pid60930 00:21:28.518 Removing: /var/run/dpdk/spdk_pid60948 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61076 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61110 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61186 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61230 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61254 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61313 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61353 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61382 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61422 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61451 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61491 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61520 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61560 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61589 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61629 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61664 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61698 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61733 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61767 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61802 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61836 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61873 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61910 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61948 00:21:28.518 Removing: /var/run/dpdk/spdk_pid61982 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62018 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62088 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62183 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62491 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62503 00:21:28.518 
Removing: /var/run/dpdk/spdk_pid62534 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62553 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62574 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62593 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62611 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62622 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62647 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62660 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62681 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62700 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62719 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62735 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62755 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62774 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62789 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62808 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62826 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62843 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62879 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62892 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62922 00:21:28.518 Removing: /var/run/dpdk/spdk_pid62986 00:21:28.518 Removing: /var/run/dpdk/spdk_pid63020 00:21:28.518 Removing: /var/run/dpdk/spdk_pid63024 00:21:28.518 Removing: /var/run/dpdk/spdk_pid63058 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63067 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63075 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63124 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63132 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63166 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63181 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63196 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63200 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63215 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63219 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63234 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63248 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63272 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63304 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63308 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63342 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63352 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63359 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63405 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63417 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63443 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63451 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63458 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63471 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63479 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63486 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63494 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63501 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63575 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63623 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63733 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63770 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63817 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63826 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63848 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63868 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63905 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63916 00:21:28.776 Removing: /var/run/dpdk/spdk_pid63985 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64012 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64056 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64139 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64196 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64225 00:21:28.776 Removing: 
/var/run/dpdk/spdk_pid64316 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64359 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64398 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64622 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64714 00:21:28.776 Removing: /var/run/dpdk/spdk_pid64748 00:21:28.776 Removing: /var/run/dpdk/spdk_pid65069 00:21:28.776 Removing: /var/run/dpdk/spdk_pid65107 00:21:28.776 Removing: /var/run/dpdk/spdk_pid65401 00:21:28.776 Removing: /var/run/dpdk/spdk_pid65810 00:21:28.776 Removing: /var/run/dpdk/spdk_pid66081 00:21:28.776 Removing: /var/run/dpdk/spdk_pid66862 00:21:28.776 Removing: /var/run/dpdk/spdk_pid67681 00:21:28.776 Removing: /var/run/dpdk/spdk_pid67797 00:21:28.776 Removing: /var/run/dpdk/spdk_pid67865 00:21:28.776 Removing: /var/run/dpdk/spdk_pid69121 00:21:28.776 Removing: /var/run/dpdk/spdk_pid69327 00:21:28.776 Removing: /var/run/dpdk/spdk_pid72701 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73002 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73107 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73245 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73268 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73296 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73323 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73420 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73550 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73706 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73782 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73975 00:21:28.776 Removing: /var/run/dpdk/spdk_pid74058 00:21:28.776 Removing: /var/run/dpdk/spdk_pid74151 00:21:28.776 Removing: /var/run/dpdk/spdk_pid74455 00:21:28.776 Removing: /var/run/dpdk/spdk_pid74840 00:21:28.776 Removing: /var/run/dpdk/spdk_pid74846 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75119 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75134 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75152 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75183 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75188 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75489 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75532 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75821 00:21:28.776 Removing: /var/run/dpdk/spdk_pid76013 00:21:28.776 Removing: /var/run/dpdk/spdk_pid76397 00:21:28.776 Removing: /var/run/dpdk/spdk_pid76906 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77722 00:21:29.035 Removing: /var/run/dpdk/spdk_pid78301 00:21:29.035 Removing: /var/run/dpdk/spdk_pid78307 00:21:29.035 Removing: /var/run/dpdk/spdk_pid80211 00:21:29.035 Removing: /var/run/dpdk/spdk_pid80267 00:21:29.035 Removing: /var/run/dpdk/spdk_pid80327 00:21:29.035 Removing: /var/run/dpdk/spdk_pid80382 00:21:29.035 Removing: /var/run/dpdk/spdk_pid80503 00:21:29.035 Removing: /var/run/dpdk/spdk_pid80562 00:21:29.035 Removing: /var/run/dpdk/spdk_pid80618 00:21:29.035 Removing: /var/run/dpdk/spdk_pid80675 00:21:29.035 Removing: /var/run/dpdk/spdk_pid81001 00:21:29.035 Removing: /var/run/dpdk/spdk_pid82155 00:21:29.035 Removing: /var/run/dpdk/spdk_pid82295 00:21:29.035 Removing: /var/run/dpdk/spdk_pid82538 00:21:29.035 Removing: /var/run/dpdk/spdk_pid83090 00:21:29.035 Removing: /var/run/dpdk/spdk_pid83244 00:21:29.035 Removing: /var/run/dpdk/spdk_pid83401 00:21:29.035 Removing: /var/run/dpdk/spdk_pid83498 00:21:29.035 Removing: /var/run/dpdk/spdk_pid83670 00:21:29.035 Removing: /var/run/dpdk/spdk_pid83779 00:21:29.035 Removing: /var/run/dpdk/spdk_pid84431 00:21:29.035 Removing: /var/run/dpdk/spdk_pid84466 00:21:29.035 Removing: /var/run/dpdk/spdk_pid84502 00:21:29.035 Removing: /var/run/dpdk/spdk_pid84756 
00:21:29.035 Removing: /var/run/dpdk/spdk_pid84791 00:21:29.035 Removing: /var/run/dpdk/spdk_pid84825 00:21:29.035 Removing: /var/run/dpdk/spdk_pid85252 00:21:29.035 Removing: /var/run/dpdk/spdk_pid85269 00:21:29.035 Removing: /var/run/dpdk/spdk_pid85507 00:21:29.035 Removing: /var/run/dpdk/spdk_pid85621 00:21:29.035 Removing: /var/run/dpdk/spdk_pid85639 00:21:29.035 Clean 00:21:29.035 13:03:44 -- common/autotest_common.sh@1451 -- # return 0 00:21:29.035 13:03:44 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:21:29.035 13:03:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.035 13:03:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.035 13:03:45 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:21:29.035 13:03:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.035 13:03:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.035 13:03:45 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:29.035 13:03:45 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:29.035 13:03:45 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:29.035 13:03:45 -- spdk/autotest.sh@391 -- # hash lcov 00:21:29.035 13:03:45 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:29.035 13:03:45 -- spdk/autotest.sh@393 -- # hostname 00:21:29.035 13:03:45 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:29.293 geninfo: WARNING: invalid characters removed from testname! 
00:21:55.832 13:04:08 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:56.398 13:04:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:59.033 13:04:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:01.567 13:04:17 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:04.098 13:04:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:07.394 13:04:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:09.928 13:04:25 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:09.928 13:04:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:09.928 13:04:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:09.928 13:04:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.928 13:04:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.929 13:04:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.929 13:04:25 -- paths/export.sh@3 -- $ 
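With the tests finished, autotest.sh turns the gcov counters into a single filtered report: capture, merge with the pre-test baseline, then strip DPDK, system headers and the example/app sources. Condensed from the lcov calls above; $out and $LCOV_OPTS are shorthand for this run's output directory and the repeated --rc/--no-external flags, not variables the script necessarily defines:

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'   # plus the genhtml_*/geninfo_* rc flags shown above
    out=/home/vagrant/spdk_repo/spdk/../output

    # capture the counters produced by the test run, tagged with the hostname
    lcov $LCOV_OPTS -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"

    # merge with the baseline captured before the tests, then filter the total
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done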
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.929 13:04:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.929 13:04:25 -- paths/export.sh@5 -- $ export PATH 00:22:09.929 13:04:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.929 13:04:25 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:09.929 13:04:25 -- common/autobuild_common.sh@444 -- $ date +%s 00:22:09.929 13:04:25 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721048665.XXXXXX 00:22:09.929 13:04:25 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721048665.GntAcb 00:22:09.929 13:04:25 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:22:09.929 13:04:25 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:22:09.929 13:04:25 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:09.929 13:04:25 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:09.929 13:04:25 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:09.929 13:04:25 -- common/autobuild_common.sh@460 -- $ get_config_params 00:22:09.929 13:04:25 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:09.929 13:04:25 -- common/autotest_common.sh@10 -- $ set +x 00:22:09.929 13:04:25 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:09.929 13:04:25 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:22:09.929 13:04:25 -- pm/common@17 -- $ local monitor 00:22:09.929 13:04:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:09.929 13:04:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:09.929 13:04:25 -- pm/common@25 -- $ sleep 1 00:22:09.929 13:04:25 -- pm/common@21 -- $ date +%s 00:22:09.929 13:04:25 -- pm/common@21 -- $ date +%s 00:22:09.929 13:04:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721048665 00:22:09.929 13:04:25 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721048665 00:22:09.929 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721048665_collect-vmstat.pm.log 00:22:09.929 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721048665_collect-cpu-load.pm.log 00:22:10.912 13:04:26 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:22:10.912 13:04:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:10.912 13:04:26 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:10.912 13:04:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:10.912 13:04:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:10.912 13:04:26 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:10.912 13:04:26 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:10.912 13:04:26 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:10.912 13:04:26 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:10.912 13:04:26 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:10.912 13:04:26 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:10.912 13:04:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:10.912 13:04:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:10.912 13:04:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:10.912 13:04:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:10.912 13:04:26 -- pm/common@44 -- $ pid=87393 00:22:10.912 13:04:26 -- pm/common@50 -- $ kill -TERM 87393 00:22:10.912 13:04:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:10.912 13:04:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:10.912 13:04:26 -- pm/common@44 -- $ pid=87395 00:22:10.912 13:04:26 -- pm/common@50 -- $ kill -TERM 87395 00:22:10.912 + [[ -n 5097 ]] 00:22:10.912 + sudo kill 5097 00:22:10.922 [Pipeline] } 00:22:10.942 [Pipeline] // timeout 00:22:10.948 [Pipeline] } 00:22:10.967 [Pipeline] // stage 00:22:10.973 [Pipeline] } 00:22:10.991 [Pipeline] // catchError 00:22:11.002 [Pipeline] stage 00:22:11.004 [Pipeline] { (Stop VM) 00:22:11.047 [Pipeline] sh 00:22:11.326 + vagrant halt 00:22:14.621 ==> default: Halting domain... 00:22:21.216 [Pipeline] sh 00:22:21.496 + vagrant destroy -f 00:22:25.684 ==> default: Removing domain... 
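The autopackage step brackets its work with the pm resource monitors: collect-cpu-load and collect-vmstat are started against the power/ output directory with a timestamped prefix, and stop_monitor_resources later reads their pid files and sends TERM, which is the kill -TERM 87393/87395 pair above. A sketch of that bracket; how the collectors detach into the background is not visible in the trace, so the trailing & is illustrative:

    PM=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
    POWER=/home/vagrant/spdk_repo/spdk/../output/power
    stamp=monitor.autopackage.sh.$(date +%s)      # 1721048665 in this run

    # start: one collector per resource, logging under $POWER with the shared prefix
    "$PM/collect-cpu-load" -d "$POWER" -l -p "$stamp" &
    "$PM/collect-vmstat"   -d "$POWER" -l -p "$stamp" &

    # stop: stop_monitor_resources checks each collector's pid file and signals it
    for pidfile in "$POWER"/collect-cpu-load.pid "$POWER"/collect-vmstat.pid; do
        [ -e "$pidfile" ] && kill -TERM "$(cat "$pidfile")"
    done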
00:22:25.698 [Pipeline] sh 00:22:25.978 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/output 00:22:25.987 [Pipeline] } 00:22:26.008 [Pipeline] // stage 00:22:26.014 [Pipeline] } 00:22:26.030 [Pipeline] // dir 00:22:26.036 [Pipeline] } 00:22:26.053 [Pipeline] // wrap 00:22:26.060 [Pipeline] } 00:22:26.072 [Pipeline] // catchError 00:22:26.081 [Pipeline] stage 00:22:26.083 [Pipeline] { (Epilogue) 00:22:26.097 [Pipeline] sh 00:22:26.378 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:32.946 [Pipeline] catchError 00:22:32.948 [Pipeline] { 00:22:32.960 [Pipeline] sh 00:22:33.262 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:33.262 Artifacts sizes are good 00:22:33.270 [Pipeline] } 00:22:33.286 [Pipeline] // catchError 00:22:33.298 [Pipeline] archiveArtifacts 00:22:33.304 Archiving artifacts 00:22:33.470 [Pipeline] cleanWs 00:22:33.480 [WS-CLEANUP] Deleting project workspace... 00:22:33.480 [WS-CLEANUP] Deferred wipeout is used... 00:22:33.485 [WS-CLEANUP] done 00:22:33.486 [Pipeline] } 00:22:33.500 [Pipeline] // stage 00:22:33.505 [Pipeline] } 00:22:33.518 [Pipeline] // node 00:22:33.522 [Pipeline] End of Pipeline 00:22:33.645 Finished: SUCCESS